ABSTRACT

Disclosed are methods, apparatus, and systems, including computer program products, implementing and using techniques for processing frames of video data sent across a display interface using a block-based encoding scheme and a tag ID. The disclosed techniques provide for optimization of the display interface situated between the graphics processor and the display controller of an electronic device. The disclosed techniques minimize the amount of signaling over the interface and reduce the power consumed at the interface. Accordingly, the battery life of some electronic devices can be extended. In one embodiment, the graphics processor is configured to receive frames of video data, where each frame includes one or more blocks of the video data. The graphics processor is configured to encode each block of video data, generate a tag ID associated with each encoded block of video data, and output each encoded block of video data and associated tag ID. The display controller is configured to receive the encoded blocks of video data and associated tag ID's from the graphics processor via the display interface. The display controller is configured to interpret the tag ID associated with a respective encoded block of video data and determine whether to decode at least part of the respective encoded block of video data according to the tag ID. A display, such as a memory-based display, is in communication with the display controller. The display is configured to receive and display decoded blocks of video data from the display controller.
WHAT IS CLAIMED IS:

1. An apparatus comprising:
a graphics processor configured to receive frames of video data, each frame including a plurality of blocks of the video data, the graphics processor configured to: i) encode each block of video data, and ii) generate a tag ID associated with each encoded block of video data, the graphics processor configured to output each encoded block of video data and associated tag ID;
a display interface in communication with the graphics processor;
a display controller in communication with the display interface, the display controller configured to receive the encoded blocks of video data and associated tag ID's from the graphics processor via the display interface, the display controller configured to: i) interpret the tag ID associated with a respective encoded block of video data, and ii) determine whether to decode at least part of the respective encoded block of video data according to the tag ID; and
a display in communication with the display controller, the display configured to receive decoded blocks of video data from the display controller, the display configured to display the decoded blocks of video data.

2. The apparatus of claim 1, the tag ID including one or more indications selected from the group consisting of: a start of a new frame of video data, a redundant frame of video data, a start of a new block of video data, and a redundant block of video data.

3. The apparatus of claim 1, the display controller configured to decode the encoded block of video data if the tag ID indicates a start of a new block of video data.

4. The apparatus of claim 1, the display controller configured to disregard the encoded block of video data if the tag ID indicates a start of a redundant block of video data.

5. The apparatus of claim 4, the display controller configured to output a previous decoded block of video data.

6. The apparatus of claim 1 further comprising:
a memory in communication with the display controller, the memory capable of storing the decoded blocks of video data.

7. The apparatus of claim 1, the graphics processor configured to encode the blocks of video data using a Run Length Encoding (RLE) process.

8. The apparatus of claim 1, the graphics processor configured to encode the blocks of video data using an Arithmetic Coding (AC) process.

9. The apparatus of claim 1, the graphics processor configured to encode the blocks of video data using a Huffman Coding (HC) process.

10. The apparatus of claim 1, the display being a memory display.

11. The apparatus of claim 10, the memory display being a bi-stable display.

12. The apparatus of claim 11, the bi-stable display being one selected from the group consisting of: an interferometric modulation display (IMOD), a cholesteric liquid crystal display (ChLCD), and an electrophoretic display.

13. The apparatus of claim 1, the frames of video data being stored in one or more frame buffers, the graphics processor configured to receive the frames of video data from the one or more frame buffers.

14. The apparatus of claim 1, the display interface configured to pass the encoded blocks of video data using a standard selected from the group consisting of: the Mobile Industry Processor Interface (MIPI) standard, the Mobile Display Digital Interface (MDDI) standard, the Low-Voltage Differential Signaling (LVDS) standard, and the High-Definition Multimedia Interface (HDMI) standard.

15. The apparatus of claim 1 further comprising:
an encoder configured to encode each block of video data.

16. The apparatus of claim 1 further comprising:
a decoder configured to decode the at least part of the respective encoded block of video data according to the tag ID.

17. The apparatus of claim 1 further comprising:
a tag ID generator configured to generate the tag ID associated with each encoded block of video data.

18. The apparatus of claim 1 further comprising:
a tag ID reader configured to interpret the tag ID associated with the respective encoded block of video data.

19. The apparatus of claim 1, the graphics processor configured to output a packet including a respective encoded block of video data and associated tag ID.

20. The apparatus of claim 19, the associated tag ID located at a beginning of the packet.

21. The apparatus of claim 19, the packet further including an indication of a number of bytes of data.

22. The apparatus of claim 1 further comprising:
a driver circuit configured to send at least one signal comprising the decoded blocks of video data to the display.

23. The apparatus of claim 22, the display controller configured to send the decoded blocks of video data to the driver circuit.

24. The apparatus of claim 1 further comprising:
an image source module configured to send the frames of video data to the graphics processor.

25. The apparatus of claim 24, the image source module comprising at least one of a receiver, a transceiver, and a transmitter.

26. The apparatus of claim 1 further comprising:
an input device configured to receive input data and to communicate the input data to a controller.

27. The apparatus of claim 1, the graphics processor situated in a server device.

28. The apparatus of claim 1, the display controller situated in a client device.

29. The apparatus of claim 28, the display situated in the client device.

30. A method comprising:
receiving frames of video data at a graphics processor, each frame including a plurality of blocks of the video data;
encoding each block of video data;
generating a tag ID associated with each encoded block of video data;
providing each encoded block of video data and associated tag ID from the graphics processor to a display interface in communication with the graphics processor;
receiving the encoded blocks of video data and associated tag ID's at a display controller in communication with the display interface;
interpreting the tag ID associated with a respective encoded block of video data;
determining whether to decode at least part of the respective encoded block of video data according to the tag ID; and
providing decoded blocks of video data from the display controller to a display in communication with the display controller, the display configured to display the decoded blocks of video data.

31. The method of claim 30 further comprising:
decoding the encoded block of video data if the tag ID indicates a start of a new block of video data.

32. The method of claim 30 further comprising:
disregarding the encoded block of video data if the tag ID indicates a start of a redundant block of video data.

33. The method of claim 30 further comprising:
outputting a previous decoded block of video data if the tag ID indicates a start of a redundant block of video data.

34. The method of claim 30, encoding each block of video data comprising:
encoding the block of video data using a Run Length Encoding (RLE) process.

35. The method of claim 30, providing each encoded block of video data and associated tag ID from the graphics processor to the display interface comprising:
outputting a packet including a respective encoded block of video data and associated tag ID.

36. The method of claim 30, generating the tag ID associated with each encoded block of video data comprising:
performing a compare operation between successive blocks of video data.

37. The method of claim 30, the graphics processor situated in a server device.

38. The method of claim 30, the display controller situated in a client device.

39. An apparatus comprising:
graphics processor means for receiving frames of video data, each frame including a plurality of blocks of the video data, and i) encoding each block of video data, and ii) generating a tag ID associated with each encoded block of video data, and outputting each encoded block of video data and associated tag ID;
display interface means in communication with the graphics processor means;
display controller means in communication with the display interface means, the display controller means for receiving the encoded blocks of video data and associated tag ID's from the graphics processor means via the display interface means, and: i) interpreting the tag ID associated with a respective encoded block of video data, and ii) determining whether to decode at least part of the respective encoded block of video data according to the tag ID; and
display means in communication with the display controller means, the display means for receiving decoded blocks of video data from the display controller means and displaying the decoded blocks of video data.

40. The apparatus of claim 39, the graphics processor situated in a server device.

41. The apparatus of claim 39, the display controller situated in a client device.

42. The apparatus of claim 41, the display situated in the client device.

43. A method comprising:
receiving frames of video data at a graphics processor, each frame including a plurality of blocks of the video data;
encoding each block of video data;
generating a tag ID associated with each encoded block of video data; and
providing each encoded block of video data and associated tag ID from the graphics processor to a display interface in communication with the graphics processor.

44. The method of claim 43, the graphics processor situated in a server device.

45. The method of claim 43, the tag ID including one or more indications selected from the group consisting of: a start of a new frame of video data, a redundant frame of video data, a start of a new block of video data, and a redundant block of video data.

46. The method of claim 43, providing each encoded block of video data and associated tag ID from the graphics processor to the display interface comprising:
outputting a packet including a respective encoded block of video data and associated tag ID.

47. The method of claim 43, generating the tag ID associated with each encoded block of video data comprising:
performing a compare operation between successive blocks of video data.

48. A method comprising:
receiving encoded blocks of video data and tag ID's at a display controller from a display interface, each of the encoded blocks having a respective associated tag ID;
interpreting the tag ID associated with a respective encoded block of video data;
determining whether to decode at least part of the respective encoded block of video data according to the tag ID; and
providing decoded blocks of video data from the display controller to a display in communication with the display controller, the display configured to display the decoded blocks of video data.

49. The method of claim 48, the display controller situated in a client device.

50. The method of claim 48 further comprising:
decoding the encoded block of video data if the tag ID indicates a start of a new block of video data.

51. The method of claim 48 further comprising:
disregarding the encoded block of video data if the tag ID indicates a start of a redundant block of video data.

52. The method of claim 48 further comprising:
outputting a previous decoded block of video data if the tag ID indicates a start of a redundant block of video data.
APPARATUS AND METHODS FOR PROCESSING FRAMES OF VIDEO DATA UPON TRANSMISSION ACROSS A DISPLAY INTERFACE USING A BLOCK-BASED ENCODING SCHEME AND A TAG ID

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Patent Application No. 12/820,838, filed June 22, 2010, which is hereby incorporated by reference in its entirety.

FIELD

[0002] This application relates generally to display technology, and more specifically to circuitry for controlling displays.

DESCRIPTION OF RELATED TECHNOLOGY

[0003] Power consumption is a concern with modern electronic devices, particularly portable handheld devices. Battery-powered cell phones and wireless electronic reading devices incorporating conventional display technologies require frequent re-charging of the batteries, in some cases several times in a single day. The need to constantly re-charge such devices interferes with their fundamental purpose: to let a user keep using them, uninterrupted by recharging, while moving from place to place throughout the day.

[0004] A significant amount of power, often the majority of power, is consumed by the displays in many modern portable electronic devices for certain applications. Currently, the majority of displays used on mobile devices are Liquid Crystal Displays (LCDs), which require continuous updates of video data to maintain the video output on the display. Electronic reading devices with bi-stable displays do not require continuous updates but still consume an unacceptable amount of power. The power across a display interface tends to be high, particularly for larger displays. Indeed, the power required by active display interfaces in modern devices is growing rapidly, particularly as display resolutions increase for these devices. The power consumed by the display interface is generally proportional to the square of the switching voltage, the frequency of the display data, and the capacitance of the interconnect lines of the interface (that is, roughly P ∝ C·V²·f).

[0005] Thus, an overall concern with modern electronic devices is conservation of the power used to drive the displays.

SUMMARY

[0006] Disclosed are methods, apparatus, and systems implementing and using techniques for processing frames of video data sent across a display interface using a block-based encoding scheme and tag ID's.

[0007] Some aspects of the present application incorporate techniques which cooperate with a host element, often in the form of a graphics processor or controller, and a display element, often in the form of a display controller which drives a display. A display interface connects the graphics processor with the display controller. The disclosed apparatus and methods provide for the compression of video data at the host element, before it is sent across the display interface, and then the de-compression of this data at the display element.

[0008] The display interface is traditionally viewed as a physical layer or connection between the host element and the display element. Some aspects of the present application are based on a logical view of the display interface. Logical operations can be performed to organize and transmit the data across the display interface. These operations are applicable to various physical interfaces and connections.
Regardless of the physical nature of the display interface layer, applying techniques disclosed herein, video data can be encoded on the graphics processor side of the interface and selectively decoded at the display controller side after it is sent across the interface. The decoded data is, accordingly, selectively output from the display controller to the display.

[0009] Some aspects of the present application provide for optimization of the display interface situated between the graphics processor and the display controller of an electronic device. The optimization techniques described herein minimize the amount of signaling over the interface and reduce the power consumed at the interface. Accordingly, the battery life of some electronic devices can be extended.

[0010] According to one aspect of the present application, an apparatus comprises a graphics processor configured to receive frames of video data. Each frame includes one or more blocks of the video data. The graphics processor is configured to encode each block of video data and generate a tag ID associated with each encoded block of video data. The graphics processor is configured to output each encoded block of video data and associated tag ID. A display interface is in communication with the graphics processor. A display controller is in communication with the display interface. The display controller is configured to receive the encoded blocks of video data and associated tag ID's from the graphics processor via the display interface. The display controller is configured to interpret the tag ID associated with a respective encoded block of video data and determine whether to decode at least part of the respective encoded block of video data according to the tag ID. A display, such as a memory-based display, is in communication with the display controller. The display is configured to receive decoded blocks of video data from the display controller and to display the decoded blocks of video data.

[0011] According to one implementation, the tag ID can include one or more indications such as: a start of a new frame of video data, a redundant frame of video data, a start of a new block of video data, and a redundant block of video data. For instance, the display controller can be configured to disregard the encoded block of video data if the tag ID indicates a start of a redundant block of video data.

[0012] Depending on the desired implementation, the graphics processor can be configured to encode the blocks of video data using processing techniques such as Run Length Encoding (RLE), Arithmetic Coding (AC), or Huffman Coding (HC).

[0013] According to one implementation, the display is a bi-stable display such as: an interferometric modulation display (IMOD), a cholesteric liquid crystal display (ChLCD), or an electrophoretic display.

[0014] Depending on the desired implementation, the display interface can be configured to pass the encoded blocks of video data using a standard such as: the Mobile Industry Processor Interface (MIPI) standard, the Mobile Display Digital Interface (MDDI) standard, the Low-Voltage Differential Signaling (LVDS) standard, or the High-Definition Multimedia Interface (HDMI) standard.

[0015] Another aspect of the present application relates to a method in which frames of video data are received at a graphics processor and each block of video data is encoded. Tag ID's associated with each encoded block of video data are generated. For instance, the tag ID can be generated by performing a compare operation between successive blocks of video data.
Encoded blocks of video data and associated tag ID's are provided from the graphics processor to a display interface in communication with the graphics processor. A display controller in communication with the display interface receives the encoded blocks of video data and associated tag ID's. The tag ID associated with a respective encoded block of video data is interpreted. It is determined whether to decode at least part of the respective encoded block of video data according to the tag ID. Decoded blocks of video data are provided from the display controller to a display in communication with the display controller. The display is configured to display the decoded blocks of video data.

[0016] These and other methods and apparatus of aspects of the present application may be implemented using various types of hardware, software, firmware, etc., and combinations thereof. For example, some features of the application may be implemented, at least in part, by computer programs embodied in machine-readable media. The computer programs may include instructions for operating, at least in part, the devices described herein. These and other features and benefits of aspects of the application will be described in more detail below with reference to the associated drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] The included drawings are for illustrative purposes and serve only to provide examples of possible structures and process steps for the disclosed methods, apparatus, and systems for processing frames of video data sent across a display interface using a block-based encoding scheme and a tag ID.

[0018] FIG. 1 is a block diagram of an electronic device for processing a sequence of frames of video data across a display interface using a block-based encoding scheme and a tag ID, constructed according to one embodiment.

[0019] FIG. 2 is a block diagram of an alternative embodiment of an electronic device for processing a sequence of frames of video data across a display interface using a block-based encoding scheme and a tag ID.

[0020] FIG. 3 is a diagram illustrating a packet of a compressed block of video data in a frame using a Run Length Encoding (RLE) scheme and a tag ID, in accordance with one embodiment.

[0021] FIG. 4 is an illustration of a set of tag ID parameters in a compressed block of video data, in accordance with one embodiment.

[0022] FIG. 5 is a flow diagram of a method for processing a sequence of frames of video data across a display interface using a block-based encoding scheme and a tag ID, performed in accordance with one embodiment.

[0023] FIG. 6 is a flow diagram of a method for determining whether to decode an encoded block of video data according to a tag ID, performed in accordance with one embodiment.

[0024] FIG. 7 is a system block diagram illustrating one embodiment of an electronic device incorporating an interferometric modulator display.

[0025] FIGS. 8A and 8B are system block diagrams illustrating an embodiment of a visual display device comprising a plurality of interferometric modulators.

DETAILED DESCRIPTION

[0026] While the present application will be described with reference to a few specific embodiments, the description and specific embodiments are merely illustrative and are not to be construed as limiting. Various modifications can be made to the described embodiments without departing from the true spirit and scope as defined by the appended claims. For example, the steps of methods shown and described herein are not necessarily performed in the order indicated.
It should also be understood that the methods may include more or fewer steps than are indicated. In some implementations, steps described herein as separate steps may be combined. Conversely, what may be described herein as a single step may be implemented in multiple steps.

[0027] Similarly, device functionality may be apportioned by grouping or dividing tasks in any convenient fashion. For example, when steps are described herein as being performed by a single device (e.g., by a single logic device), the steps may alternatively be performed by multiple devices and vice versa. Moreover, the specific components, parameters, and numerical values described herein are provided merely by way of example and are in no way limiting. The drawings referenced herein are not necessarily drawn to scale.

[0028] Embodiments of the present application overcome some of the drawbacks of conventional electronic devices by reducing the amount of power consumed at the display stage. By incorporating embodiments of the present application, electronic devices are able to reduce this power drain, which is a significant component of the overall power consumption of the device. Thus, some of the features described herein provide for a longer lasting memory display, such as a bi-stable display, for instance, in a battery-powered mobile reading device.

[0029] The apparatus and methods described herein leverage the characteristics of both the content of the video data being transmitted and the features of memory displays. As used herein, "memory display" refers to any display having a memory function, that is, where the display is capable of retaining displayed video data. Examples of suitable memory displays include bi-stable displays as well as other types of displays incorporating memory devices such as frame buffers. With respect to the content, one technique involves the use of tag ID's associated with blocks of video data sent across the display interface. Embodiments of the present application can use a block-based approach to sending data across the display interface, in which individual blocks of pixels within a frame of video data are processed. A tag ID generator is provided on the graphics processor side of the display interface, as further explained below, and a counterpart tag ID reader is located on the display controller side. The tag ID generator generates a tag ID for unique blocks of video data being sent. The tag ID reader interprets the tag ID to determine whether to write a particular block to the display.

[0030] A second technique described herein uses a block-based encoder, for instance a Run Length Encoder, on the graphics processor side, and a counterpart block-based decoder on the display controller side of the display interface. In some implementations, Run Length Encoding (RLE) is desirable because it is lossless, meaning no loss is introduced by the encoding scheme in signals sent from the graphics processor to the display controller. In addition, RLE is desirable because it can be simple to implement, thus reducing code delay and processing power. In some embodiments, RLE is performed according to the color of the pixels. The data in images, particularly in sub-portions or blocks of the image, is often correlated by color. Thus, higher encoding and decoding efficiency can be achieved by grouping red, green, and blue pixels together, for example.
Also, depending on the desired implementation, raster scanning or serpentine scanning can be used to read and encode the pixel color values row-by-row or in some other sequence within a block.

[0031] Different encoding and decoding schemes can be incorporated into embodiments of the present application as an alternative to RLE. Examples of such schemes include Arithmetic Coding (AC) and Huffman Coding (HC). AC and HC are useful in some implementations in which more compression is desired.

[0032] In one embodiment, the encoder is configured to encode m x n blocks within each frame of video data; a brief partitioning sketch follows this introductory discussion. The m x n block can be of variable or fixed size, depending on the implementation. Also, when implementing block-based encoding and decoding in this manner, tradeoffs can be made between memory size, code delay, implementation delay, and compression efficiency by varying the m x n size. Encoding successive blocks of pixel data in this manner can take advantage of the spatial correlation of colors in most images, thus significantly reducing the size of the data to send across the display interface. For instance, for each m x n block, a Run Length Encoded packet can be generated and sent to the display controller. The block-based decoder is configured to decode and output the data in the packet when the associated tag ID indicates it is appropriate to do so.

[0033] Other apparatus and methods in addition to the use of tag ID's and block-based encoding/decoding are disclosed herein. The embodiments incorporating the various features are applicable to a variety of displays, but are particularly beneficial for memory-based display technology. For instance, because bi-stable displays have a memory state, bi-stable displays do not require the display controller to provide continuous updates of video data to the display. Bi-stable displays can afford some latency. Thus, the display controller need not decode and output every block or frame of data it receives when the data is redundant, i.e., a copy of previously received data for the region of the display corresponding to the received block. Also, using RLE in combination with memory-based displays facilitates the handling of "bursty" data signals, i.e., data which is uneven in nature.

[0034] Embodiments of the present application can be incorporated in a variety of modern electronic devices, particularly those in which it is desirable to incorporate energy-efficient bi-stable displays, such as Interferometric Modulator Displays (IMODs), Cholesteric LCDs (ChLCDs), electrophoretics (e-ink), and other displays that have bi-stable properties. The techniques described herein optimize the architecture of graphics processors and display controllers for such displays. The amount of signaling required between the graphics processor and the display controller, i.e., over the display interface, is reduced to lower the overall energy consumption of the device.

[0035] Embodiments of the present application can be incorporated into electronic devices having other types of memory displays, i.e., displays having a frame buffer or other memory unit local to the display so that incoming video data can be buffered. For instance, as described in greater detail below, a frame buffer can be provided on the display controller side of the display interface and used to buffer data provided from the display controller to the display.
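To make the m x n block handling referenced above concrete, the following minimal sketch partitions a row-major frame into fixed 8x8 blocks. It is illustrative only: the Python setting, the function and variable names, and the fixed block size are assumptions for illustration, not elements of the disclosed apparatus, which may use variable block sizes and hardware partitioning.

```python
# Illustrative sketch only: partition a row-major frame into m x n pixel
# blocks for block-based encoding. The 8x8 default and all names here are
# assumptions for illustration, not requirements of the disclosed design.

def frame_to_blocks(frame, width, height, m=8, n=8):
    """Return the frame's m-wide by n-tall blocks in raster order, each
    block flattened to a list of pixel values read row by row."""
    blocks = []
    for top in range(0, height, n):          # top row of each block
        for left in range(0, width, m):      # left column of each block
            block = []
            for y in range(top, min(top + n, height)):
                start = y * width + left
                block.extend(frame[start:start + min(m, width - left)])
            blocks.append(block)
    return blocks

# Example: a 16x16 frame divides into four 8x8 blocks of 64 pixels each.
frame = [0] * (16 * 16)
blocks = frame_to_blocks(frame, 16, 16)
assert len(blocks) == 4 and all(len(b) == 64 for b in blocks)
```

A serpentine variant of the kind mentioned above would simply reverse the traversal direction on alternate rows within a block before encoding.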
[0036] FIG. 1 is a block diagram of an electronic device 100 for processing a sequence of frames of video data across a display interface using a block-based encoding scheme and a tag ID, constructed according to one embodiment. In FIG. 1, a stream of video data 104 is provided as an input to a graphics processor 108. The graphics processor 108 is in communication with a frame buffer 112 implemented, for example, as a bank of SDRAM. In this way, as the graphics processor 108 receives frames of input video data 104, graphics processor 108 is capable of storing the frames in frame buffer 112.

[0037] In FIG. 1, graphics processor 108 is in communication with a display interface 116. Video data that has been processed by graphics processor 108, using techniques described herein, is output from graphics processor 108 to display interface 116 for passing the processed data over one or more communications lines to a display controller 120, also in communication with display interface 116.

[0038] In FIG. 1, depending on the desired implementation, display interface 116 can be configured according to a particular communications standard, such as the Mobile Industry Processor Interface (MIPI) standard, the Mobile Display Digital Interface (MDDI) standard, or the High-Definition Multimedia Interface (HDMI) standard. An example of a suitable width of the display interface 116 is in the range of 6-24 bits. However, features of the present application are applicable to display interfaces of other suitable widths.

[0039] The MIPI standard, which is a serial interface providing differential signaling, is a common interface standard for electronic devices with smaller displays, for instance, cell phones. In such implementations, the width of the display interface 116 can be relatively small, for instance, 6 bits. MDDI is another standard used for electronic devices 100 with smaller displays. The encoding and selective decoding techniques using tag ID's, as described herein, are equally applicable to electronic devices having larger displays, such as those using the HDMI standard at display interface 116.

[0040] As shown in the embodiment of FIG. 2, described in greater detail below, the communications lines comprising display interface 116 include a clock signal line 204 ("CLK") and one or more other control signal lines 208, for instance, providing vertical and horizontal synchronization signals, "VSync" and "HSync," respectively. These communications lines illustrated in FIG. 2 represent one physical implementation of display interface 116 as an RGB interface, in which red, green, and blue data is provided over the 6-24 bit data channel mentioned above.

[0041] In another embodiment, for instance, when display interface 116 is implemented in accordance with the MIPI or the Low-Voltage Differential Signaling (LVDS) standard, interface 116 can have a different physical configuration. When LVDS is used, a serializing transmitter and a de-serializing receiver can be situated on opposite sides of display interface 116. The transmitter would encode the video data and clock signal to be sent over interface 116 into a differential serial signal. The receiver would be operatively coupled on the display controller side to receive differential data sent over interface 116, perform serial-to-parallel conversion of the data, and provide the converted data to the display controller.
In other implementations, display interface 116 can be configured as a memory-mapped interface, for instance, with a multiplexed address and data bus.

[0042] In FIGS. 1 and 2, the disclosed techniques for encoding and selectively decoding video data using tag ID's are applicable to a variety of configurations of display interface 116. As mentioned above, this represents an improvement over conventional schemes, in which no compression is applied to data sent across a display interface. With conventional devices, the data sent across a display interface is uncompressed, irrespective of the standard according to which the display interface might be configured. In FIGS. 1 and 2, the techniques disclosed herein provide for encoding and selective decoding of data, which can be transmitted across display interface 116 in serial fashion and with differential signaling.

[0043] Returning to FIG. 1, display controller 120 is in communication with a display 124, which may be an LCD display, in one embodiment, or a memory display such as a bi-stable display, in another embodiment. The display controller 120 drives display 124 so that display 124 is capable of displaying video data received from display controller 120. In the case of a bi-stable display, display 124 can be constructed as an IMOD, a ChLCD, or an electrophoretic display. In one embodiment, display controller 120 and display 124 are in communication with a frame buffer 128 or other suitable memory unit in which processed data can be stored by controller 120 before being output to display 124. In one implementation, the display controller 120, frame buffer 128, and display 124 can be constructed as an integral unit.

[0044] FIG. 2 shows a block diagram of an alternative embodiment of an electronic device 200 for processing a sequence of frames of video data across a display interface. The electronic device 200 of FIG. 2 is similar to electronic device 100 of FIG. 1 in most respects, with like reference numerals indicating like parts in the respective diagrams. FIG. 2 illustrates separate modules, which provide the solutions of encoding and selective decoding of data, as well as the generation and reading of tag ID's associated with packets of data sent across display interface 116. In particular, one of the solutions described herein adds a block-based encoder 212 and a tag ID generator 216 to the graphics processor side of display interface 116, while a counterpart block-based decoder 220 and tag ID reader 224 are added on the display controller side of interface 116. The block-based encoder 212 and tag ID generator 216 can be constructed as separate modules apart from graphics processor 108, as shown in FIG. 2. Similarly, block-based decoder 220 and tag ID reader 224 can be constructed as separate modules from display controller 120, as illustrated. Alternatively, modules 212 and 216 can be integrated as processing units of graphics processor 108, as shown in FIG. 1. By the same token, block-based decoder 220 and tag ID reader 224 can be integral processing units of display controller 120 in the embodiment of FIG. 1.

[0045] In one embodiment, block-based encoder 212 and block-based decoder 220 cooperate to encode and decode blocks of video data using the RLE scheme.
RLE is a form of encoding in which runs of data, that is, sequences in which the same pixel value occurs in consecutive data elements, are stored as a single data value and count rather than as the original run.

[0046] As described in further detail below, the RLE scheme can be applied to portions of a frame of video data to be transmitted across display interface 116. Using the RLE scheme in this manner saves energy by reducing the amount of data sent over display interface 116. Block-based encoder 212 can apply the RLE technique or other encoding schemes to take advantage of spatial correlations in the video data to compress the data before sending it. For example, a frame of video data retrieved by graphics processor 108 can be separated into 8x8 blocks. Thus, for instance, an all-black image in a particular 8x8 block of pixels could be encoded by block-based encoder 212, applying the RLE scheme, as L64c0x0, that is, a run of length 64 of color 0 (black). Thus, for a black block of 8x8 pixels, the RLE scheme saves 192 bytes of data, assuming 24-bit pixel data. The handling of video data in frames and its division into blocks is described in greater detail below.

[0047] In FIG. 2, frame buffer 112 of FIG. 1 has been implemented as a plurality of frame buffers 112a-112c. Separate frame buffers 112a-112c can be used by graphics processor 108 to store and retrieve separate frames of video data. In addition, graphics processor 108 can perform operations on the separate frames of video data and store resulting calculations, such as comparison data, in different locations within the frame buffer array 112a-112c, as described herein. Frame buffers 112a-112c can be located off-chip from graphics processor 108 or, alternatively, formed as integral units with processor 108, depending on the desired implementation.

[0048] FIG. 3 is a diagram illustrating the conversion of blocks of video data in a frame to compressed packets using RLE and tag ID's, in accordance with one embodiment. In FIG. 3, an uncompressed frame 304 of video data is retrieved from one of frame buffers 112a-112c by graphics processor 108 of FIG. 2. Graphics processor 108 is configured to divide frame 304 into a total of N individual blocks (block 1, block 2, ... block N) of a designated m x n size. The block-based encoder 212 is configured to encode each individual m x n block of pixels as part of a compressed packet 308, as shown in FIG. 3. Often, the encoded packet 308 will also include an "escape" character to indicate to the decoder that the end of the block has been reached. The escape character can be implemented in different manners, often depending on the format of the data being sent. Such an escape character or other delimiting mechanism can serve to limit memory usage on the display controller side of interface 116.

[0049] The tag ID generator 216 is configured to generate a tag ID for each encoded block of video data. The tag ID, in the embodiment of FIG. 3, is included at the beginning, or top, of the header of packet 308, as shown in FIG. 3, to indicate the type of data included in packet 308. In addition, graphics processor 108 of FIG. 2 is configured to identify the number of bytes in the compressed block 308 and also include this information in the header, as shown in FIG. 3. Thus, on the receiving side of display interface 116, display controller 120 can immediately determine the size of packet 308 in addition to the type of data indicated by the tag ID.
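As one concrete reading of the FIG. 3 packet, the sketch below Run Length Encodes a block and prepends a header carrying the tag ID and a byte count, followed by a trailing escape byte. The tag codes, the one-byte field widths, the 8-bit pixel values, and the compare against the previous block are assumptions chosen for illustration; the disclosure leaves these widths and values to the implementation.

```python
# Illustrative sketch of the FIG. 3 packet: RLE-compress one m x n block
# and wrap it as [tag ID][byte count][RLE payload][escape]. Tag values,
# field widths, and the escape byte are assumptions, not the patent's spec.

TAG_NEW_BLOCK = 0x2        # hypothetical codes for the FIG. 4 parameters
TAG_REDUNDANT_BLOCK = 0x1
ESCAPE = 0xFF              # marks the end of the encoded block

def rle_encode(block):
    """Encode runs as (count, value) byte pairs. Assumes 8-bit pixel
    values and, for a 64-pixel block, a payload always under 256 bytes."""
    out = bytearray()
    run_value, run_len = block[0], 1
    for pixel in block[1:]:
        if pixel == run_value and run_len < 255:
            run_len += 1
        else:
            out += bytes([run_len, run_value])
            run_value, run_len = pixel, 1
    out += bytes([run_len, run_value])
    return bytes(out)

def make_packet(block, previous_block):
    """Build one packet. A block identical to its predecessor is tagged
    redundant and sent with no payload, so only one byte crosses the
    interface for that block."""
    if previous_block is not None and block == previous_block:
        return bytes([TAG_REDUNDANT_BLOCK])
    payload = rle_encode(block)
    return bytes([TAG_NEW_BLOCK, len(payload)]) + payload + bytes([ESCAPE])

# An all-black 8x8 block compresses to a single (64, 0) run:
packet = make_packet([0] * 64, None)
assert packet == bytes([TAG_NEW_BLOCK, 2, 64, 0, ESCAPE])
```

Here the five-byte packet stands in for the 64 uncompressed pixels of the block (192 bytes at the 24-bit depth assumed in the example above), which is the kind of reduction in interface traffic the disclosure targets.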
[0050] FIG. 4 is an illustration of a set of possible tag ID parameters in a compressed packet 308, in accordance with one embodiment. Applying techniques described herein, the tag ID generator 216 associated with graphics processor 108 is capable of generating a variety of tag parameters to identify the type of data included in the associated encoded m x n block of data within packet 308. For instance, as shown in FIG. 4, the tag ID component of packet 308 can indicate whether the included block represents the start of a new frame of video data or a redundant frame of video data. In addition, the tag ID can indicate the start of a new block of video data within a frame, as well as whether the encoded block is redundant in view of the previous block. In this way, on the display controller side of display interface 116, responsive to tag ID reader 224 processing one or more of the tag ID's of FIG. 4, the display controller can determine whether to decode the included block of encoded video data, as further described below. For instance, when the tag ID at the beginning of a packet 308 indicates that the encoded block is redundant, display controller 120 can disregard the included data. That is, since the previous block is the same, the new block does not need to be output to display 124.

[0051] In FIG. 4, the tag ID component of packet 308 can be represented as a sequence of bits to indicate one or more of the tag ID parameters. For instance, the four tag ID parameters described and illustrated in FIG. 4 could be represented with a 2-bit code (e.g., 00, 01, 10, 11). More common allocations for the tag ID are 4-bit wide and 8-bit wide values. In most implementations, the tag ID is preferably as wide as the rest of the video data being sent in packet 308. The tag ID in packet 308 can have other widths, depending on the desired implementation. In a 4-bit wide implementation, a respective bit could indicate a respective one of the tag ID parameters shown in FIG. 4. For instance, a "1100" tag ID could indicate that the encoded block represents both the start of a new frame and the start of a new block of video data to be displayed.

[0052] The operations carried out to generate tag ID's at graphics processor 108 and read tag ID's at display controller 120 are described in further detail below, following a general discussion of embodiments of methods for encoding and selectively decoding blocks of video data using the apparatus of FIGS. 1 and 2.

[0053] FIG. 5 shows a flow diagram of a method 500 for processing a sequence of frames of video data across a display interface, performed in accordance with an embodiment of the present application. The operations of method 500 are described primarily with reference to the apparatus of FIG. 2, but should be understood to apply equally to electronic device 100 of FIG. 1. In 504, graphics processor 108 receives a stream of input video data 104 and stores frames of the sequence in one or more frame buffers 112a-112c. In 508, graphics processor 108 is capable of retrieving individual frames from frame buffers 112a-112c for processing. In 512, once a frame is retrieved by graphics processor 108, block-based encoder 212 can apply RLE or another encoding scheme described herein to encode m x n blocks of data in the frame, as illustrated in FIG. 3. In 516 and 520, tag ID generator 216 is configured to generate an appropriate tag ID to associate with individual blocks encoded by encoder 212.
In one embodiment, in 516, compare operations can be performed between successive blocks of video data in a frame to determine the appropriate tag ID.

[0054] In FIG. 5, in 516, logic can be implemented and configured at graphics processor 108 to compare successive blocks of data to determine an appropriate tag ID. In one embodiment, for example, a sequence of blocks within a frame can be identified by memory addresses within one or more of the frame buffers 112a-112c. As individual blocks in a sequence are retrieved by graphics processor 108, the pixel values of two blocks in a sequence can be compared to determine whether the data is redundant or new. A similar set of logic at graphics processor 108 can be applied to respective frames in a sequence to similarly identify redundant frames and set the appropriate tag ID, as shown in FIG. 4. Separate frame buffers can be used to do the comparisons. For example, the first frame or block in a sequence could be stored in frame buffer 112a, the second frame or block in the sequence stored in buffer 112b, and the output of the compare operation stored in buffer 112c.

[0055] In FIG. 5, in 520, tag ID generator 216 is capable of outputting the appropriate tag ID responsive to the operations performed in 516. In this way, in 524, graphics processor 108 outputs packets of respective encoded blocks and associated tag ID's, as illustrated in FIG. 3, to display controller 120 via display interface 116. Over time, sequences of encoded blocks and tag ID's are sent across display interface 116.

[0056] In FIG. 5, in 528, on the other side of display interface 116, display controller 120 receives the encoded packets. In 532, tag ID reader 224 interprets the tag ID associated with each encoded block in the packet. In 536, based on the tag ID parameter, as illustrated in FIG. 4, display controller 120 can then determine whether to decode the associated encoded block of data. This determination in 536 is described in further detail below, with reference to FIG. 6. In 540, depending on the determination made in 536, display controller 120 is configured to output decoded blocks of video data to display 124.

[0057] FIG. 6 shows a flow diagram of a method 536 for determining whether to decode an encoded block of video data according to a tag ID. In 604, display controller 120 checks whether the tag ID indicates the start of a new block of video data, for instance, if tag 2 in FIG. 4 has a "1" or "on" value. If so, in 608, block-based decoder 220 will decode the block of data. Thus, in general, as packets of encoded data are received on the display controller side, tag ID reader 224 will process the first byte of the packet, which is generally the tag ID. The decoder 220 will respond according to what the tag indicates. Thus, in 612, display controller 120 is configured to check whether the tag ID indicates the start of a redundant block of video data. If so, in 616, display controller 120 will ignore the block. Often, when a block is ignored, in 620, display controller 120 is configured to output the previous decoded block in the sequence of received packets, since the data in the blocks is the same. In this instance, display controller 120 will still update display 124, but it is using existing information that was decoded and displayed in the last cycle, i.e., when the previous block was processed. The data is essentially copied for the present cycle.
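A counterpart sketch of the FIG. 6 decision flow on the display controller side follows. The tag constants and packet layout match the hypothetical encoder sketch above, so all names remain illustrative assumptions rather than the disclosed implementation; the run-length decoder simply inverts the (count, value) pairs.

```python
# Illustrative counterpart to FIG. 6: read the tag ID (first byte of the
# packet), decode new blocks, and re-output the previously decoded block
# when the tag marks the data redundant. Constants match the encoder sketch.

TAG_NEW_BLOCK = 0x2
TAG_REDUNDANT_BLOCK = 0x1
ESCAPE = 0xFF

def rle_decode(payload):
    """Expand (count, value) byte pairs back into a flat pixel list."""
    pixels = []
    for i in range(0, len(payload), 2):
        count, value = payload[i], payload[i + 1]
        pixels.extend([value] * count)
    return pixels

def handle_packet(packet, previous_block):
    """Return the block to output to the display for this packet."""
    tag = packet[0]
    if tag & TAG_NEW_BLOCK:
        byte_count = packet[1]
        payload = packet[2:2 + byte_count]   # escape byte follows payload
        return rle_decode(payload)
    if tag & TAG_REDUNDANT_BLOCK:
        return previous_block                # copy last cycle's output
    raise ValueError("unrecognized tag ID")

# A redundant-block packet triggers re-use of the prior decoded block:
prior = [0] * 64
assert handle_packet(bytes([TAG_REDUNDANT_BLOCK]), prior) is prior
assert handle_packet(bytes([TAG_NEW_BLOCK, 2, 64, 0, ESCAPE]), None) == prior
```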
[0058] Thus, in FIGS. 5 and 6, block-based decoder 220 is triggered to decode new blocks of data and ignore redundant blocks of data, according to what the tag ID attached to each block indicates. The block-based decoder 220 is triggered to decode the appropriate blocks by display controller 120.

[0059] Returning to FIGS. 3 and 4, in one embodiment, the first byte in each compressed packet is the unique tag ID. In this way, as tag ID reader 224 of FIG. 2 receives and processes sequences of blocks, tag ID reader 224 can identify the tag ID as the initial data in the packet. Block-based decoder 220 can then decode new blocks of data and store the decoded data in a line buffer as RGB data to be output to display 124.

[0060] In FIGS. 1 and 2, the apparatus comprising electronic devices 100 and 200 is primarily implemented in hardware. Certain mechanisms and operations described herein could be implemented in software or in combinations of hardware and software. In certain hardware implementations, in which the graphics processor 108, encoder 212, and tag ID generator 216 are implemented on the same chip, the operations and interactions of these components can be made more optimized and efficient, thus consuming less power. For instance, graphics processor 108 could be implemented as an ASIC with a video compression module to implement block-based encoder 212 and tag ID generator 216. Similarly, on the display controller side, block-based decoder 220 and tag ID reader 224 could be integrated with display controller 120 in a single chip or circuit. Thus, on the display controller side, additional power savings and optimization can be achieved, contributing to the overall efficiency of electronic devices 100 and 200.

[0061] Implementations of the methods and apparatus described herein provide for reducing the amount of data sent across display interface 116. The amount of active time that the CLK signal 204 of FIG. 2 needs to be on is reduced. This represents a significant reduction in the amount of power consumed at display interface 116.

[0062] Embodiments of the methods and apparatus described herein bring the power-saving benefits of compression and decompression to the display interface 116. The techniques described herein do so without much cost in the way of additional circuitry, as illustrated by the incorporation of block-based encoder 212 and tag ID generator 216 into graphics processor 108 and the incorporation of block-based decoder 220 and tag ID reader 224 into display controller 120, as shown in FIG. 1. RLE and tag ID capabilities can be built into integrated circuits, so the resulting chip real estate is small and adds little cost.

[0063] Using the block-based approaches described herein provides opportunities for exploiting areas of a display screen that have redundant content. This is to be contrasted with raster scan technology used in conventional display interfaces, thus maximizing the benefit for bi-stable and other memory-based displays. For instance, with video signals having primarily textual content, the display interface write time could be reduced by 30-50%. Reducing the write time at the display interface corresponds to a reduction in the time that the interface is required to be active. The power consumption of the various components active on both sides of display interface 116 is also reduced.

[0064] The embodiments described herein may be implemented in any electronic device that is configured to display an image, whether in motion (e.g., video) or stationary (e.g., still image), and whether textual or pictorial.
More particularly, it is contemplated that the embodiments may be implemented in or associated with a variety of electronic devices such as, but not limited to, mobile telephones, wireless devices, personal data assistants (PDAs), hand-held or portable computers, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, computer monitors, auto displays (e.g., odometer displays), cockpit controls and/or displays, displays of camera views (e.g., the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, packaging, and aesthetic structures (e.g., the display of images on a piece of jewelry).

[0065] FIG. 7 is a system block diagram illustrating one embodiment of an electronic device that may incorporate apparatus described herein. The electronic device may, for example, form part or all of a portable display device such as a portable media player, a smartphone, a personal digital assistant, a cellular telephone, a smartbook, or a netbook. Here, the electronic device includes a controller 21, which may include one or more general purpose single- or multi-chip microprocessors such as an ARM®, Pentium®, 8051, MIPS®, Power PC®, or ALPHA®, or special purpose microprocessors such as a digital signal processor, microcontroller, or programmable gate array. Controller 21 may be configured to execute one or more software modules. In addition to executing an operating system, the controller may be configured to execute one or more software applications, including a web browser, a telephone application, an email program, or any other software application. The graphics processor 108 of FIGS. 1 and 2 can be implemented as a module of controller 21.

[0066] The controller 21 is configured to communicate with a display controller 120, as shown in FIGS. 1 and 2 and in FIGS. 7 and 8. In one embodiment, the display controller 120 includes a row driver circuit 24 and a column driver circuit 26 that provide signals to a display array or panel 30. The display controller 120 generally includes driving electronics for driving the display array 30. Controller 21 and display controller 120 may sometimes be referred to herein as "logic devices" and/or part of a "logic system." Note that although FIG. 7 illustrates a 3x3 array of interferometric modulators for the sake of clarity, the display array 30 may contain a very large number of interferometric modulators, and may have a different number of interferometric modulators in rows than in columns (e.g., 300 pixels per row by 190 pixels per column). The display array 30 has rows 30a and columns 30b comprising the 3x3 or other size array of modulators.

[0067] FIGS. 8A and 8B are system block diagrams illustrating an embodiment of a display device 40, as one example of an electronic device 100 or 200, as described above. The display device 40 can be, for example, a cellular or mobile telephone. However, the same components of display device 40, or slight variations thereof, are also illustrative of various types of display devices such as televisions and portable media players.

[0068] The display device 40 includes a housing 41, a display 30, an antenna 43, a speaker 45, an input device 48, and a microphone 46. The housing 41 is generally formed from any of a variety of manufacturing processes, including injection molding and vacuum forming.
In addition, the housing 41 may be made from any of a variety of materials, including but not limited to plastic, metal, glass, rubber, and ceramic, or a combination thereof. In one embodiment, the housing 41 includes removable portions (not shown) that may be interchanged with other removable portions of different color, or containing different logos, pictures, or symbols.

[0069] The display 30 of exemplary display device 40 may be any of a variety of displays, including a bi-stable or other memory display, as described herein. In other embodiments, the display 30 includes a flat-panel display, such as a plasma, EL, OLED, STN LCD, or TFT LCD display as described above, or a non-flat-panel display, such as a CRT or other tube device. However, for purposes of describing the present embodiment, the display 30 includes an interferometric modulator display, as described herein.

[0070] The components of one embodiment of exemplary display device 40 are schematically illustrated in FIG. 8B. The illustrated exemplary display device 40 includes a housing 41 and can include additional components at least partially enclosed therein. For example, in one embodiment, the exemplary display device 40 includes a network interface 27 that includes an antenna 43 which is coupled to a transceiver 47. The transceiver 47 is connected to a controller 21, which is connected to conditioning hardware 52. The conditioning hardware 52 may be configured to condition a signal (e.g., filter a signal). The conditioning hardware 52 is connected to a speaker 45 and a microphone 46. The controller 21 is also connected to an input device 48 and a driver controller 29. The driver controller 29 is coupled to a frame buffer 28 and to a display controller 120, which in turn is coupled to a display array 30. Conditioning hardware 52 and/or driver controller 29 may sometimes be referred to herein as part of the logic system. A power supply 50 provides power to all components as required by the particular exemplary display device 40 design.

[0071] The network interface 27 includes the antenna 43 and the transceiver 47 so that the exemplary display device 40 can communicate with one or more devices over a network. In one embodiment, the network interface 27 may also have some processing capabilities to relieve requirements of the controller 21. The antenna 43 is any antenna for transmitting and receiving signals. In one embodiment, the antenna transmits and receives RF signals according to the IEEE 802.11 standard, including IEEE 802.11(a), (b), or (g). In another embodiment, the antenna transmits and receives RF signals according to the BLUETOOTH standard. In the case of a cellular telephone, the antenna is designed to receive CDMA, GSM, AMPS, W-CDMA, or other known signals that are used to communicate within a wireless cell phone network. The transceiver 47 pre-processes the signals received from the antenna 43 so that they may be received by, and further manipulated by, the controller 21. The transceiver 47 also processes signals received from the controller 21 so that they may be transmitted from the exemplary display device 40 via the antenna 43.

[0072] In an alternative embodiment, the transceiver 47 can be replaced by a receiver. In yet another alternative embodiment, the network interface 27 can be replaced by an image source, which can store or generate image data to be sent to the controller 21.
For example, the image source can be a digital video disc (DVD) or a hard-disc drive that contains image data, or a software module that generates image data.

[0073] Controller 21 generally controls the overall operation of the exemplary display device 40. The controller 21 receives data, such as compressed image data from the network interface 27 or an image source, and processes the data into raw image data or into a format that is readily processed into raw image data. The controller 21 then sends the processed data to the driver controller 29 or to the frame buffer 28 for storage. Raw image data refers to the information that identifies the image characteristics at each location within an image. For example, such image characteristics can include color, saturation, and gray-scale level.

[0074] In one embodiment, the controller 21 includes a microcontroller, CPU, or other logic device to control operation of the exemplary display device 40. Conditioning hardware 52 generally includes amplifiers and filters for transmitting signals to the speaker 45, and for receiving signals from the microphone 46. Conditioning hardware 52 may be discrete components within the exemplary display device 40, or may be incorporated within the controller 21 or other components.

[0075] The driver controller 29 takes the raw image data generated by the controller 21, either directly from the controller 21 or from the frame buffer 28, and reformats the raw image data appropriately for high-speed transmission to the display controller 120. Specifically, the driver controller 29 reformats the raw image data into a data flow having a raster-like format, such that it has a time order suitable for scanning across the display array 30. Then the driver controller 29 sends the formatted information to the display controller 120. Although a driver controller 29, such as an LCD controller, is often associated with the system controller 21 as a stand-alone Integrated Circuit (IC), such controllers may be implemented in many ways. For example, they may be embedded in the controller 21 as hardware, embedded in the controller 21 as software, or fully integrated in hardware with the display controller 120.

[0076] The display controller 120 receives the formatted information from the driver controller 29 and reformats the video data into a parallel set of waveforms that are applied many times per second to the hundreds, and sometimes thousands, of leads coming from the display's x-y matrix of pixels.

[0077] In one embodiment, the driver controller 29, display controller 120, and display array 30 are appropriate for any of the types of displays described herein. For example, in one embodiment, driver controller 29 is a conventional display controller or a bi-stable display controller (e.g., an interferometric modulator controller). In another embodiment, display controller 120 is a conventional driver or a bi-stable display driver (e.g., an interferometric modulator display driver). In one embodiment, a driver controller 29 is integrated with the display controller 120. Such an embodiment is common in highly integrated systems such as cellular phones, watches, and other small-area displays. In yet another embodiment, display array 30 is a bi-stable display array (e.g., a display including an array of interferometric modulators).
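As a rough illustration of the raster-style reformatting described in paragraph [0075], the following minimal sketch reorders a flat frame buffer into a row-by-row stream whose time order matches the scan order of the display array. The frame layout and all names here are assumptions made for illustration, not details taken from this specification.

```python
from typing import Iterator, List, Sequence


def to_raster_stream(frame: Sequence[int], width: int, height: int) -> Iterator[Sequence[int]]:
    """Yield pixel rows top-to-bottom, i.e., in display scan order."""
    for row in range(height):
        yield frame[row * width:(row + 1) * width]


# Toy usage: a 4x3 frame stored as a flat buffer. A driver-controller
# analog would forward each yielded row across the display interface.
frame: List[int] = list(range(12))
for row_pixels in to_raster_stream(frame, width=4, height=3):
    pass  # e.g., send row_pixels to the display controller
```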
[0078] The input device 48 allows a user to control the operation of the exemplary display device 40. In one embodiment, input device 48 includes a keypad, such as a QWERTY keyboard or a telephone keypad, a button, a switch, a touch-sensitive screen, or a pressure- or heat-sensitive membrane. In one embodiment, the microphone 46 is an input device for the exemplary display device 40. When the microphone 46 is used to input data to the device, voice commands may be provided by a user for controlling operations of the exemplary display device 40.

[0079] Power supply 50 can include a variety of energy storage devices as are well known in the art. For example, in one embodiment, power supply 50 is a rechargeable battery, such as a nickel-cadmium battery or a lithium-ion battery. In another embodiment, power supply 50 is a renewable energy source, a capacitor, or a solar cell, including a plastic solar cell or solar-cell paint. In another embodiment, power supply 50 is configured to receive power from a wall outlet.

[0080] In some implementations, control programmability resides, as described above, in a driver controller, which can be located in several places in the electronic display system. In some cases, control programmability resides in the display controller 120. The above-described optimization may be implemented in any number of hardware and/or software components and in various configurations.

[0081] Returning to FIG. 2, in another alternative embodiment, the processing modules associated with the graphics processor 108, such as the block-based encoder 212, tag ID generator 216, and frame buffers 112a-c, are situated in a first device, such as a server computer in a server-based data processing network. In this embodiment, the display controller 120, display 124, tag ID reader 224, and block-based decoder 220 are situated in a second device, such as a client computer in the data processing network, separate from the first device. In this embodiment, the display interface 116 can be implemented between the server and client as one or more communications lines comprising the network. The graphics processor 108 in the host, i.e., server, device is configured to send data to the display controller 120 in the client device in similar fashion as described above with respect to FIGS. 1-6. In some embodiments, the display interface 116 is implemented as a wireless interface between the server and client devices of the network. Significant power savings can be achieved in such embodiments: because the energy cost per bit of sending data wirelessly is generally greater than in a wired configuration, each redundant block withheld from the interface yields a correspondingly larger saving. A sketch of this client-side behavior appears after the following paragraph.

[0082] Although illustrative embodiments and applications are shown and described herein, many variations and modifications are possible that remain within the concept, scope, and spirit of the disclosure, and these variations should become clear after perusal of this application. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the application is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
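The following is a minimal, hypothetical sketch of the client-side handling described in paragraph [0081]: the tag ID reader inspects each received block's tag, and the block-based decoder runs only for blocks tagged as new, while redundant blocks are served from a cache of previously decoded blocks. The tag values, the use of run-length coding as the block codec, and all function names are assumptions made for illustration; they are not taken from the specification.

```python
# Hypothetical tag values; the specification does not define a bit encoding.
TAG_NEW_BLOCK = 0x01
TAG_REDUNDANT_BLOCK = 0x02


def rle_decode(payload: bytes) -> bytes:
    """Toy run-length decoder: payload is a sequence of (count, value) byte pairs."""
    out = bytearray()
    for i in range(0, len(payload), 2):
        count, value = payload[i], payload[i + 1]
        out.extend(bytes([value]) * count)
    return bytes(out)


def handle_block(block_id: int, tag: int, payload: bytes, cache: dict) -> bytes:
    """Return pixel data for one block, decoding only when the tag requires it."""
    if tag == TAG_REDUNDANT_BLOCK:
        # No decoding is performed for this block; redisplay the
        # previously decoded copy held locally.
        return cache[block_id]
    decoded = rle_decode(payload)
    cache[block_id] = decoded
    return decoded
```

Skipping the decode for redundant blocks is what would make such a scheme attractive on a wireless link: the redundant block's payload need never cross the interface at all, only its short tag.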
The invention relates to a method of forming a tier of an array of memory cells within an array area, the memory cells individually comprising a capacitor and an elevationally-extending transistor, the method comprising using two, and only two, sacrificial masking steps within the array area of the tier in forming the memory cells. Other methods are disclosed, as are structures independent of method of fabrication.
1. A method of forming a tier of an array of memory cells within an array region, the memory cells individually comprising capacitors and vertically extending transistors, the method comprising using two, and only two, sacrificial masking steps within the array region of the tier in forming the memory cells, individual ones of the capacitors being formed to have one of their capacitor electrodes directly against a lateral side of an upper source/drain region of an individual one of the vertically extending transistors.

2. The method of claim 1, wherein each of the two sacrificial masking steps comprises subtractively etching exposed channel-containing material of the transistors within the array region while using sacrificial masking material over unexposed portions of the channel-containing material within the array region as a mask.

3. The method of claim 1, wherein each of the two sacrificial masking steps comprises subtractively etching material within the array region, and neither of the two sacrificial masking steps etches material of the capacitors within the array region.

4. The method of claim 1, wherein a sequentially-first of the two sacrificial masking steps comprises using a sacrificial masking material to mask digit line material while subtractively etching exposed digit line material to form digit lines beneath the sacrificial masking material and ultimately beneath the transistors and capacitors formed within the array region.

5. The method of claim 1, comprising etching gate material of the transistors to form transistor gates, and etching all material forming the capacitors to form the capacitors within the array region, without any masking material being located thereover during such etchings.

6. The method of claim 1 comprising forming the individual memory cells to be 1T-1C.

7. The method of claim 1 comprising forming the individual memory cells to be 2T-2C.

8. A method of forming an array of memory cells individually comprising capacitors and vertically extending transistors, the method sequentially comprising:
using a first sacrificial mask, patterning a digit line material and a channel-containing material thereover along a first direction to form a digit line within the array, with a line of the channel-containing material being located above the digit line;
using a second sacrificial mask, patterning the channel-containing material along a second direction different from the first direction to cut the line of channel-containing material above the digit line into spaced-apart individual channels of individual transistors of individual memory cells within the array;
forming a gate insulator and an access line laterally across and operatively laterally adjacent lateral sides of the individual channels of the individual transistors; and
forming capacitors that individually have one of their capacitor electrodes directly against a lateral side of an upper source/drain region of one of the individual transistors of the individual memory cells within the array.

9. The method of claim 8, wherein said forming of said capacitors forms individual ones of said one capacitor electrodes directly against a pair of laterally opposed sides of the respective upper source/drain regions.

10. The method of claim 9, wherein said forming of said capacitors forms the individual ones of said one capacitor electrodes directly against no more than two laterally opposed sides of the respective upper source/drain regions.

11. The method of claim 9, wherein the individual upper source/drain regions have completely-surrounding peripheral lateral side surfaces, the individual ones of said one capacitor electrodes being directly against all of said completely-surrounding peripheral lateral side surfaces of the individual upper source/drain regions.

12. The method of claim 8 comprising forming the individual memory cells to be 1T-1C.

13. The method of claim 8 comprising forming the individual memory cells to be 2T-2C.

14. The method of claim 8, wherein said one capacitor electrodes are individually formed directly against less than all of the lateral sides of the respective upper source/drain regions.

15. The method of claim 14, wherein said one capacitor electrodes are individually formed directly against less than half of the lateral sides of the respective upper source/drain regions.

16. A method of forming a tier of an array of memory cells individually comprising capacitors and vertically extending transistors, the method comprising:
forming a digit line material above a substrate, forming a channel-containing material above the digit line material, and forming a source/drain-containing material above the channel-containing material;
patterning the digit line material, the channel-containing material, and the source/drain-containing material to form digit lines within the array and to form vertically extending pillars that comprise individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array;
forming gate insulators and access lines laterally across and operatively laterally adjacent lateral sides of the individual channels of the individual transistors;
forming first capacitor electrodes above first laterally opposed sides of the pillars, directly against a pair of first laterally opposed sides of the respective upper source/drain regions within the array; and
forming a capacitor insulator above the first capacitor electrodes and forming a second capacitor electrode above the capacitor insulator within the array.

17. The method of claim 16, wherein the patterning comprises subtractive etching using more than one sacrificial masking step within the array of the tier.

18. The method of claim 17 comprising forming the individual memory cells to be 1T-1C or 2T-2C and using no more than two sacrificial masking steps within the array of the tier.

19. The method of claim 16, wherein the pillars are formed to be conductive from the upper source/drain regions to tops of the pillars.

20. The method of claim 16, wherein the pillars are formed to be non-conductive from tops of the upper source/drain regions to tops of the pillars.

21. The method of claim 20, wherein the pillars are formed to be insulative from the tops of the upper source/drain regions to the tops of the pillars.

22. The method of claim 20, wherein the pillars are formed to be semiconductive from the tops of the upper source/drain regions to the tops of the pillars.

23. The method of claim 16, wherein the access lines are formed to comprise a gate insulator and an access line pair extending laterally across a pair of first laterally opposed sides of the pillars, the gate insulator and the access line pair being operatively laterally adjacent first laterally opposed sides of the individual channels within the array, the forming of the gate insulator and the access line pair comprising:
forming the gate insulator above tops and the first laterally opposed sides of the pillars and of the individual channels of the individual transistors, and between laterally-adjacent pillars in rows of the pillars;
forming access gate material above the gate insulator, including above the tops of the pillars, above the first laterally opposed sides of the pillars and of the individual channels of the individual transistors, and between the laterally-adjacent pillars in the rows; and
maskless anisotropically etching the access gate material from above the tops of the pillars and from between the laterally-adjacent pillars in the rows to form the access line pairs in respective individual rows interconnecting the transistors in that row.

24. The method of claim 23 comprising conducting the maskless anisotropic etching of the access gate material in at least two time-spaced etching steps.

25. The method of claim 24, wherein lower portions of trenches between the rows of pillars are plugged with a sacrificial material during a latter of the time-spaced etching steps, the sacrificial material being removed before forming the first capacitor electrodes.

26. The method of claim 16, wherein the forming of the first capacitor electrodes comprises:
forming a first capacitor electrode material above the tops and the first laterally opposed sides of the pillars, directly against the first laterally opposed sides of the respective upper source/drain regions, and between laterally-adjacent pillars in rows of the pillars; and
maskless anisotropically etching the first capacitor electrode material from above the tops of the pillars and from between the laterally-adjacent pillars in the rows.

27. A method of forming an array of memory cells individually comprising capacitors and vertically extending transistors, the method comprising:
forming pillars extending vertically upward from digit lines, the pillars individually comprising individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array;
forming gate insulators and access lines laterally across and operatively laterally adjacent lateral sides of the individual channels of the individual transistors;
forming first capacitor electrodes completely surrounding and directly against all peripheral lateral sides of the respective upper source/drain regions within the array; and
forming capacitor insulators above and completely surrounding individual ones of the first capacitor electrodes, and forming second capacitor electrodes above and completely surrounding the capacitor insulators within the array.

28. The method of claim 27 comprising forming the first capacitor electrodes to have flat tops that are vertically coincident with flat tops of the pillars they surround.

29. The method of claim 27 comprising forming the first capacitor electrodes to have tops that are not vertically coincident with tops of the pillars they surround.

30. The method of claim 27, wherein the method etches no material of the pillars after forming the first capacitor electrodes.

31. The method of claim 27 comprising, after forming the first capacitor electrodes and before forming the capacitor insulators, etching material of the pillars selectively relative to the first capacitor electrodes.

32. The method of claim 31 comprising etching away more than half of all pillar material before forming the capacitor insulators.

33. The method of claim 31 comprising forming the capacitor insulators and the second capacitor electrodes laterally over a majority of the radially inner and radially outer sides of individual ones of the first capacitor electrodes.

34. The method of claim 33 comprising forming the capacitor insulators directly against tops of the surrounded respective upper source/drain regions.

35. The method of claim 33 comprising forming the capacitor insulators laterally over all of the radially outer sides of the individual first capacitor electrodes and laterally over only some of the radially inner sides of the individual first capacitor electrodes.

36. A method of forming an array of memory cells individually comprising capacitors and vertically extending transistors, the method comprising:
forming a digit line material above a substrate, forming a channel-containing material above the digit line material, and forming a source/drain-containing material above the channel-containing material;
patterning the digit line material, the channel-containing material, and the source/drain-containing material along a first direction to form digit lines within the array, with lines of the channel-containing material and lines of the source/drain-containing material being located above the digit lines;
forming a first material in trenches laterally between the digit lines within the array and between the lines of channel-containing material and the lines of source/drain-containing material thereover;
patterning the channel-containing material, the source/drain-containing material, and the first material along a second direction different from the first direction to form vertically extending pillars that comprise individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array, with the first material being located laterally between the pillars;
forming gate insulators and access line pairs laterally across a pair of first laterally opposed sides of the pillars, the gate insulators and access line pairs being operatively laterally adjacent a pair of first laterally opposed sides of the individual channels within the array;
forming a second material in trenches laterally between the pillars and the first material within the array;
removing the first material and the second material sufficiently to expose surrounding peripheral lateral sides of the respective upper source/drain regions;
forming first capacitor electrodes completely surrounding and directly against all of the surrounding peripheral lateral sides of the respective upper source/drain regions within the array; and
forming capacitor insulators above and completely surrounding individual ones of the first capacitor electrodes, and forming second capacitor electrodes above and completely surrounding the capacitor insulators within the array.

37. The method of claim 36, wherein the first material and the second material are formed to be of the same composition as one another.

38. The method of claim 36, wherein the first material and the second material are formed to be of different compositions from one another.

39. The method of claim 36 comprising forming the first capacitor electrodes to have tops that are higher than tops of the pillars they surround.

40. A method of forming an array of memory cells individually comprising capacitors and vertically extending transistors, the method comprising:
forming alternating first and second vertically extending pillars, the first vertically extending pillars extending vertically upward from digit lines and individually comprising individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array;
forming gate insulators and access lines laterally across and operatively laterally adjacent lateral sides of the individual channels of the individual transistors;
forming first capacitor electrode line pairs laterally across the first vertically extending pillars and the second vertically extending pillars, the first capacitor electrode line pairs being directly against a pair of first laterally opposed sides of the respective upper source/drain regions of respective first vertically extending pillars within the array;
removing material of the second vertically extending pillars from lateral sides of the first capacitor electrode line pairs, and thereafter cutting laterally through the first capacitor electrode line pairs to form first capacitor electrodes that are individually directly against the first laterally opposed sides of the individual upper source/drain regions within the array; and
providing a capacitor insulator above the first capacitor electrodes and a second capacitor electrode above the capacitor insulator within the array.

41. The method of claim 40, wherein said providing occurs after said cutting.

42. The method of claim 40, wherein said providing occurs before said cutting.

43. The method of claim 40, wherein said removing removes all material of the second vertically extending pillars.

44. The method of claim 40, wherein the access lines are formed to comprise gate insulators and access line pairs extending laterally across a pair of first laterally opposed sides of the first vertically extending pillars and the second vertically extending pillars, the gate insulators and the access line pairs being operatively laterally adjacent first laterally opposed sides of the respective channels of respective first vertically extending pillars within the array.

45. An array of memory cells individually comprising capacitors and vertically extending transistors, the array comprising rows of access lines and columns of digit lines, and comprising:
individual ones of the columns comprising a digit line located below channels of the vertically extending transistors of individual memory cells within the array and interconnecting the transistors in that column;
individual ones of the rows comprising an access line located above the digit lines, the access line extending laterally across and operatively laterally adjacent lateral sides of the channels of the vertically extending transistors and interconnecting the transistors in that row; and
the capacitors of the individual memory cells within the array individually comprising:
a first capacitor electrode directly against a lateral side of an upper source/drain region of an individual one of the transistors within the array;
a capacitor insulator above the first capacitor electrode; and
a second capacitor electrode above the capacitor insulator.

46. The array of claim 45, wherein the access lines comprise an access line pair extending laterally across a pair of first laterally opposed sides of the channels of the vertically extending transistors in the row.

47. The array of claim 45, wherein the first capacitor electrodes are directly against laterally opposed sides of the upper source/drain regions of the individual transistors within the array.

48. The array of claim 47, wherein the first capacitor electrodes are directly against no more than two laterally opposed sides of the upper source/drain regions of the individual transistors within the array.

49. The array of claim 47, wherein individual upper source/drain regions have completely-surrounding peripheral lateral side surfaces, individual first capacitor electrodes being directly against all of the completely-surrounding peripheral lateral side surfaces of the individual upper source/drain regions.

50. The array of claim 45, wherein the first capacitor electrodes are individually directly against less than all of the lateral sides of the respective upper source/drain regions.

51. The array of claim 45, wherein the first capacitor electrodes are individually directly against less than half of the lateral sides of the respective upper source/drain regions.

52. The array of claim 45, wherein the capacitor insulator comprises programmable material.

53. The array of claim 52, wherein the capacitor insulator comprises programmable ferroelectric material.

54. The array of claim 45, wherein the individual memory cells are 1T-1C.

55. The array of claim 45, wherein the individual memory cells are 2T-2C.

56. An array of memory cells individually comprising capacitors and vertically extending transistors, the array comprising rows of access lines and columns of digit lines, and comprising:
individual ones of the columns comprising a digit line located below channels of the vertically extending transistors of individual memory cells within the array and interconnecting the transistors in that column;
individual ones of the rows comprising an access line located above the digit lines, the access line extending laterally across and operatively laterally adjacent lateral sides of the channels of the vertically extending transistors and interconnecting the transistors in that row; and
the capacitors of the individual memory cells within the array individually comprising:
a first capacitor electrode directly against a pair of first laterally opposed sides of an upper source/drain region of an individual one of the transistors within the array;
a capacitor insulator above the first capacitor electrode; and
a second capacitor electrode above the capacitor insulator.

57. The array of claim 56, wherein the individual memory cells are 1T-1C.

58. The array of claim 56, wherein the individual memory cells are 2T-2C.

59. An array of memory cells individually comprising capacitors and vertically extending transistors, the array comprising rows of access lines and columns of digit lines, and comprising:
individual ones of the columns comprising a digit line located below channels of the vertically extending transistors of individual memory cells within the array and interconnecting the transistors in that column;
individual ones of the rows comprising an access line located above the digit lines, the access line extending laterally across and operatively laterally adjacent lateral sides of the channels of the vertically extending transistors and interconnecting the transistors in that row;
the individual memory cells comprising vertically extending pillars above the digit lines, the pillars individually comprising one of the channels of the vertically extending transistors and an upper source/drain region of an individual one of the transistors, the pillars having a vertical thickness that is at least three times a vertical thickness of the one channel of the vertically extending transistor; and
the capacitors of the individual memory cells within the array individually comprising:
a first capacitor electrode directly against a pair of first laterally opposed sides of the pillar and of the upper source/drain region of a respective one of the individual transistors within the array;
a capacitor insulator above the first capacitor electrode; and
a second capacitor electrode above the capacitor insulator.

60. The array of claim 59, wherein the pillars are formed to be conductive from the upper source/drain regions to tops of the pillars.

61. The array of claim 59, wherein the pillars are formed to be non-conductive from tops of the upper source/drain regions to the tops of the pillars.

62. The array of claim 61, wherein the pillars are formed to be insulative from the tops of the upper source/drain regions to the tops of the pillars.

63. The array of claim 61, wherein the pillars are formed to be semiconductive from the tops of the upper source/drain regions to the tops of the pillars.

64. The array of claim 59, wherein the first capacitor electrode has a flat top that is vertically coincident with a flat top of the pillar.

65. The array of claim 59, wherein the first capacitor electrode has a top that is not vertically coincident with a top of the pillar.

66. The array of claim 65, wherein the top of the first capacitor electrode is flat.

67. The array of claim 59, wherein the first capacitor electrode is directly against no more than two laterally opposed sides of the upper source/drain region of the respective one of the individual transistors within the array.

68. The array of claim 59, wherein individual upper source/drain regions have completely-surrounding peripheral lateral side surfaces, individual first capacitor electrodes being directly against all of the completely-surrounding peripheral lateral side surfaces of the individual upper source/drain regions.

69. An array of memory cells individually comprising capacitors and vertically extending transistors, the array comprising rows of access lines and columns of digit lines, and comprising:
individual ones of the columns comprising a digit line located below channels of the vertically extending transistors of individual memory cells within the array and interconnecting the transistors in that column;
individual ones of the rows comprising an access line located above the digit lines, the access line extending laterally across and operatively laterally adjacent lateral sides of the channels of the vertically extending transistors and interconnecting the transistors in that row; and
the capacitors of the individual memory cells within the array individually comprising:
an upwardly-open and downwardly-open first capacitor electrode cylinder completely surrounding and directly against all peripheral lateral sides of an upper source/drain region of an individual one of the transistors within the array;
a capacitor insulator above radially outer and radially inner sides of the first capacitor electrode cylinder; and
a second capacitor electrode above the capacitor insulator and above the radially outer and radially inner sides of the first capacitor electrode cylinder.

70. The array of claim 69, wherein the capacitor insulator and the second capacitor electrode are above a portion of the radially inner side and a portion of the radially outer side of the first capacitor electrode cylinder.

71. The array of claim 69, wherein the capacitor insulator is laterally over all of the radially outer side of the first capacitor electrode cylinder and laterally over only a portion of the radially inner side of the first capacitor electrode cylinder.

72. The array of claim 69, wherein the capacitor insulator is directly against a top of the surrounded individual upper source/drain regions.

73. The array of claim 69, wherein the capacitor insulator is directly against a top of the first capacitor electrode cylinder.

74. The array of claim 69, wherein the capacitor insulator is directly against a top of the surrounded individual upper source/drain regions and directly against a top of the first capacitor electrode cylinder.
Memory cell arrays and methods of forming a tier of a memory cell array

Technical Field

Embodiments disclosed herein relate to methods of forming a tier of an array of memory cells, to methods of forming arrays of memory cells that individually comprise capacitors and vertically extending transistors, and to arrays of memory cells that individually comprise capacitors and vertically extending transistors.

Background

Memory is one type of integrated circuit and is used in computer systems to store data. Memory may be fabricated as one or more arrays of individual memory cells. Memory cells may be written to or read from using digit lines (which may also be referred to as bit lines, data lines, sense lines, or data/sense lines) and access lines (which may also be referred to as word lines). Digit lines may conductively interconnect memory cells along the columns of the array, and access lines may conductively interconnect memory cells along the rows of the array. Each memory cell may be uniquely addressed through the combination of a digit line and an access line.

Memory cells may be volatile or non-volatile. Non-volatile memory cells can store data for long periods of time, including while the computer is turned off. Volatile memory dissipates and therefore needs to be refreshed/rewritten, in many cases many times per second. Regardless, memory cells are configured to retain or store data in at least two different selectable states. In a binary system, the states are considered as either a "0" or a "1". In other systems, at least some individual memory cells may be configured to store more than two levels or states of information.

A capacitor is one type of electronic component that may be used in a memory cell. A capacitor has two electrical conductors separated by electrically insulating material. Energy, in the form of an electric field, may be stored electrostatically within such material. Depending on the composition of the insulator material, the stored field will be volatile or non-volatile. For example, a capacitor insulator material including only SiO2 will be volatile. One type of non-volatile capacitor is a ferroelectric capacitor, which has ferroelectric material as at least part of the insulating material. Ferroelectric materials are characterized by having two stable polarization states and can thereby comprise programmable material for a capacitor and/or a memory cell. The polarization state of a ferroelectric material can be changed by application of suitable programming voltages and remains unchanged (at least for a time) after the programming voltage is removed. Each polarization state has a different charge-storage capacitance from the other, which ideally can be used to write (i.e., store) and read the memory state without reversing the polarization state until such reversal is desired. In some memories having ferroelectric capacitors, the act of reading the memory state can undesirably reverse the polarization; accordingly, upon determining the polarization state, a rewrite of the memory cell is conducted to put the memory cell back into its pre-read state immediately after that determination. Regardless, a memory cell incorporating a ferroelectric capacitor is ideally non-volatile due to the bi-stable characteristics of the ferroelectric material that forms part of the capacitor.
Other programmable materials may be used as a capacitor insulator to render the capacitor non-volatile.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagrammatic top plan view of a substrate construction in process in accordance with an embodiment of the invention.
FIG. 2 is a perspective view of a portion of FIG. 1.
FIG. 3 is a view of the FIG. 2 construction at a processing step subsequent to that shown by FIGS. 1 and 2.
FIG. 4 is a view of the FIG. 3 construction at a processing step subsequent to that shown by FIG. 3.
FIG. 5 is a view of the FIG. 4 construction at a processing step subsequent to that shown by FIG. 4.
FIG. 6 is a view of the FIG. 5 construction at a processing step subsequent to that shown by FIG. 5.
FIG. 7 is a view of the FIG. 6 construction at a processing step subsequent to that shown by FIG. 6.
FIG. 8 is a cross-sectional view taken through line 8-8 in FIG. 7.
FIG. 9 is a cross-sectional view taken through line 9-9 in FIG. 7.
FIG. 10 is a cross-sectional view taken through line 10-10 in FIG. 7.
FIG. 11 is a front view of the FIG. 7 construction at a processing step subsequent to that shown by FIG. 7.
FIG. 12 is a view of the FIG. 11 construction at a processing step subsequent to that shown by FIG. 11.
FIG. 13 is a view of the FIG. 12 construction at a processing step subsequent to that shown by FIG. 12.
FIG. 14 is a view of the FIG. 13 construction at a processing step subsequent to that shown by FIG. 13.
FIG. 15 is a view of the FIG. 14 construction at a processing step subsequent to that shown by FIG. 14.
FIG. 16 is a perspective view of the FIG. 15 construction at a processing step subsequent to that shown by FIG. 15.
FIG. 17 is a view of the FIG. 16 construction at a processing step subsequent to that shown by FIG. 16.
FIG. 18 is a front elevational view of the FIG. 17 construction at a processing step subsequent to that shown by FIG. 17, taken through line 18-18 in FIG. 19.
FIG. 19 is a perspective view of FIG. 18.
FIG. 20 is a view of the FIG. 18 construction at a processing step subsequent to that shown by FIG. 18.
FIG. 21 is a perspective view of the FIG. 20 construction.
FIG. 22 is a front view of the FIG. 21 construction at a processing step subsequent to that shown by FIG. 21.
FIG. 23 is a diagrammatic front view of a substrate construction in process in accordance with an embodiment of the invention.
FIG. 24 is a view of the FIG. 23 construction at a processing step subsequent to that shown by FIG. 23.
FIG. 25 is a diagrammatic perspective view of a substrate construction in process in accordance with an embodiment of the invention.
FIG. 26 is a view of the FIG. 25 construction at a processing step subsequent to that shown by FIG. 25.
FIG. 27 is a view of the FIG. 26 construction at a processing step subsequent to that shown by FIG. 26.
FIG. 28 is a front view of the FIG. 27 construction at a processing step subsequent to that shown by FIG. 27.
FIG. 29 is a top view taken through line 29-29 in FIG. 28.
FIG. 30 is a view of the FIG. 28 construction at a processing step subsequent to that shown by FIG. 28.
FIG. 31 is a top view taken through line 31-31 in FIG. 30.
FIG. 32 is a cross-sectional view of the FIG. 31 construction at a processing step subsequent to that shown by FIG. 31, taken horizontally through an uppermost portion of upper source/drain region 44.
FIG. 33 is a view of the FIG. 32 construction at a processing step subsequent to that shown by FIG. 32.
FIG. 34 is a view of the FIG. 33 construction at a processing step subsequent to that shown by FIG. 33.
FIG. 35 is a front view of the FIG. 34 construction at a processing step subsequent to that shown by FIG. 34.
FIG. 36 is a diagrammatic front view of a substrate construction in process in accordance with an embodiment of the invention.
FIG. 37 is a cross-sectional view taken through line 37-37 in FIG. 36.
FIG. 38 is a view of the FIG. 37 construction at a processing step subsequent to that shown by FIG. 37.
FIG. 39 is a view of the FIG. 38 construction at a processing step subsequent to that shown by FIG. 38.
FIG. 40 is a front view of the FIG. 39 construction at a processing step subsequent to that shown by FIG. 39.
FIG. 41 is a view of the FIG. 40 construction at a processing step subsequent to that shown by FIG. 40.
FIG. 42 is a cross-sectional view taken through line 42-42 in FIG. 41.
FIG. 43 is a schematic diagram of a two-transistor/two-capacitor (2T/2C) memory cell in accordance with an embodiment of the invention.
FIG. 44 is a hybrid schematic and diagrammatic front view of a 2T/2C construction in accordance with an embodiment of the invention.

Detailed Description

Embodiments of the invention encompass methods of forming arrays of memory cells that individually comprise capacitors and vertically extending transistors, as well as arrays of such memory cells independent of method of fabrication. An example embodiment of a method of forming such an array is described first with reference to the following figures.

Referring to FIGS. 1 and 2, such figures depict a portion of a substrate fragment or construction 10 comprising a base substrate 12 having an array or array region 14 within which an array of memory cells individually comprising vertically extending transistors and capacitors will be fabricated. Region 16 (FIG. 1) is peripheral to array 14 and may be fabricated to include circuit components (i.e., circuitry). Individual memory cells will be fabricated within array 14, and array 14 may comprise rows of access lines and columns of digit lines. "Rows" and "columns" are used herein with respect to a series of access lines and a series of digit lines, respectively, along which individual memory cells have been or will be formed within array 14. The rows may be straight and/or curved and/or parallel and/or non-parallel relative to one another, as may be the columns. Further, the rows and columns may intersect relative to one another at 90 degrees or at one or more other angles. Peripheral region 16 may be considered a start region and array 14 may be considered a stop region where the repeating pattern of memory cells stops (e.g., stops at the peripheral edge of such repeating pattern), although rows of access lines and/or columns of digit lines may, and most likely will, extend into peripheral region 16.

Base substrate 12 may include any one or more of conductive/conductor material (i.e., electrically conductive material herein), semiconductive material, or insulating/insulator material (i.e., electrically insulating material herein). In the context of this invention, a conductive/conductor material has intrinsic conductivity of at least 3x10^4 Siemens/cm (i.e., at 20 degrees C, here and at all locations herein), as opposed to conductivity that can occur by movement of positive or negative charges through a thin material that is otherwise intrinsically insulating. A non-conductive/non-conductor material has intrinsic conductivity of less than 3x10^4 Siemens/cm. An insulating/insulator material has intrinsic conductivity of less than 1x10^-9 Siemens/cm (i.e., it is electrically resistive as opposed to being conductive or semiconductive). A semiconductive material has intrinsic conductivity of from 1x10^-9 Siemens/cm to less than 3x10^4 Siemens/cm. These ranges are restated in the short sketch below.
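As a compact restatement of the three conductivity ranges just defined, the following minimal sketch classifies a material by its intrinsic conductivity. It is illustrative only; the constant and function names are invented here rather than taken from the specification.

```python
# Conductivity thresholds as defined above, in Siemens/cm at 20 degrees C.
CONDUCTOR_MIN_S_PER_CM = 3e4
INSULATOR_MAX_S_PER_CM = 1e-9


def classify_material(sigma: float) -> str:
    """Classify a material by intrinsic conductivity sigma (S/cm)."""
    if sigma >= CONDUCTOR_MIN_S_PER_CM:
        return "conductive"
    if sigma < INSULATOR_MAX_S_PER_CM:
        return "insulating"
    return "semiconductive"


# Example values chosen arbitrarily within each range.
assert classify_material(5e4) == "conductive"
assert classify_material(1e2) == "semiconductive"
assert classify_material(1e-12) == "insulating"
```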
Various materials are shown above base substrate 12. The materials may be beside, vertically inside, or vertically outside the materials depicted in FIGS. 1 and 2. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within substrate 12. Control circuitry and/or other peripheral circuitry for operating components within the memory array may also be fabricated, and such circuitry may or may not be wholly or partially within the array or a sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative to one another. As used in this invention, a "sub-array" may also be considered an array. Regardless, any of the materials, regions, and structures described herein may be homogeneous or non-homogeneous, and regardless may be continuous or discontinuous over any material they overlie. Further, unless otherwise stated, each material may be formed using any suitable existing or future-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.

A digit line material 18 (FIG. 2) has been formed above substrate 12, a channel-containing material 20 has been formed above digit line material 18, and a source/drain-containing material 22 has been formed above channel-containing material 20. In this disclosure, "vertical," "higher," "upper," "lower," "top," "atop," "bottom," "above," "below," "under," "beneath," "up," and "down" generally refer to the vertical direction unless otherwise indicated. Further, "vertical" and "horizontal" as used herein are directions that are perpendicular, or within 10 degrees of perpendicular, relative to one another in three-dimensional space, independent of the orientation of the substrate. "Horizontal" refers to a general direction along a primary substrate surface (i.e., within 10 degrees thereof) and may be relative to which the substrate is processed during fabrication. Moreover, in this disclosure, "vertically extending" encompasses a range from vertical to no more than 45 degrees from vertical. Further, with respect to a field effect transistor, "vertically extending" and "vertical" refer to the orientation of the transistor's channel length, along which current flows in operation between two source/drain regions of the transistor that are at two different elevations. Example conductive digit line material 18 is one or more of an elemental metal, a mixture or alloy of two or more elemental metals, a conductive metal compound, and conductively-doped semiconductive material, with TiN being one specific example. Example channel-containing material 20 is a semiconductive material suitably doped with a conductivity-enhancing dopant, with suitably-doped polysilicon being one specific example.
Example source/drain-containing material 22 is one or more of an elemental metal, a mixture or alloy of two or more elemental metals, a conductive metal compound, and conductively-doped semiconductive material, with conductively-doped polysilicon being one specific example. Example thicknesses of materials 18, 20, and 22 are 150 to 350 Angstroms, 400 to 900 Angstroms, and 2,000 to 4,000 Angstroms, respectively.

In this invention, "thickness" by itself (with no preceding directional adjective) is defined as the mean straight-line distance perpendicularly through a given material or region from the closest surface of an immediately-adjacent material or region of different composition. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thickness. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness as a result of the thickness being variable. As used herein, "different composition" only requires that those portions of two stated materials or regions that may be directly against one another be chemically and/or physically different, for example if such materials or regions are not homogeneous. If the two stated materials or regions are not directly against one another, "different composition" only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different, if such materials or regions are not homogeneous. In this invention, materials, regions, or structures are "directly against" one another when there is at least some physical touching contact of the stated materials, regions, or structures relative to one another. In contrast, "above," "on," "adjacent," "along," and "against" not preceded by "directly" encompass "directly against" as well as constructions where intervening material(s), region(s), or structure(s) result in no physical touching contact of the stated materials, regions, or structures relative to one another.

Referring to FIG. 3, and in one embodiment, a first portion of a first sacrificial masking step is shown. In the context of this invention, a "sacrificial masking step" is a patterning technique using a combination of masking material patterned over substrate material, followed by removal (e.g., by etching) of the substrate material not covered by the masking material, and wherein at least an uppermost portion of the masking material is sacrificial and thereby ultimately removed from above the substrate. The masking material may include a lowermost portion that remains as part of the finished circuitry construction. Alternately, all of the sacrificial masking material may be completely removed. As an example, construction 10 in FIG. 3 comprises a first sacrificial mask 23 comprising masking material 25 that has been patterned atop substrate material 22/20/18/12. Masking material 25 may comprise photosensitive imaging material, or other materials with or without one or more hard-masking or other material layers. Example techniques for forming mask 23 include photolithographic patterning with or without pitch multiplication.

FIG. 4 shows example completion of the first sacrificial masking step, whereby at least some of the exposed material vertically inward of mask 23 has been removed.
Specifically, FIG. 4 shows that first sacrificial mask 23 has been used to pattern digit line material 18 and the channel-containing material 20 and source/drain-containing material 22 thereover, for example along a first direction 26, to form columns 15 of digit lines 28 within array 14, with lines 29 containing channel material 20 and lines 30 containing source/drain material 22 being located thereover. This may be conducted, by way of example, using any suitable existing or future-developed anisotropic etching chemistry that etches materials 18, 20, and 22 selectively relative to at least a lower portion of masking material 25. In this invention, a selective etch or removal is one in which one material is removed relative to another stated material at a ratio of at least 2.0:1. As shown, trenches 32 have thereby been formed laterally between digit lines 28 and the lines 29, 30 thereover. For simplicity and ease of description, only two sets of lines 28/29/30 are shown, but thousands, tens of thousands, etc. of such lines may be formed within array 14 along direction 26. In addition, such lines are shown as being straight and linear with respect to direction 26 and columns 15, but curved, non-parallel, combinations of curved and straight segments, etc. may be used.

Referring to FIG. 5, and in one embodiment, all masking material 25 (not shown) has been removed and a first material 34 has been formed in trenches 32. An example technique includes depositing first material 34 sufficiently to fill and overfill trenches 32, and then planarizing material 34 back (e.g., by CMP) at least to the uppermost surface of source/drain-containing material 22. Material 34 is ideally dielectric, particularly where it is not wholly sacrificial, with silicon nitride and/or doped or undoped silicon dioxide being examples.

Referring to FIG. 6, a second sacrificial mask 36 comprising masking material 27 has been formed over materials 22 and 34 and patterned along a second direction 38 that is different from first direction 26. The same or different material(s) and/or technique(s) described above for forming first sacrificial mask 23 may be used.

Referring to FIG. 7, second sacrificial mask 36 (not shown) has been used to pattern channel-containing material 20 and source/drain-containing material 22 along second direction 38 to form vertically extending pillars 40 (in one embodiment, vertical pillars) that comprise individual channels 42 and individual upper source/drain regions 44 of individual transistors of individual memory cells being formed within array 14. First material 34 is located laterally between pillars 40. In one embodiment, pillars 40 may be considered first vertically extending pillars, with first material 34 forming second vertically extending pillars 46, and with such first vertically extending pillars 40 and second vertically extending pillars 46 respectively alternating relative to one another along rows 17 (i.e., alternating within individual rows 17). Again, for simplicity and ease of description, only four rows 17 are shown, but thousands, tens of thousands, etc. of rows 17 may be formed within array 14 along direction 38, resulting in hundreds of thousands, millions, etc. of pillars 40. Regardless, rows 17 are shown as being straight and linear with respect to direction 38, but curved, non-parallel, combinations of curved and straight segments, etc. may be used. Construction 10 may be considered to comprise trenches 21 between rows 17.
In one embodiment, all sacrificial masking material 27 has been removed from the substrate during and/or after the removal of materials 22, 20, and/or 18 (not shown).

Accordingly, and in one embodiment, the processing described above with respect to FIGS. 1-7 is but one example technique of patterning the digit line material, the channel-containing material, and the source/drain-containing material to form digit lines within the array and to form vertically extending pillars that comprise individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array. In one such embodiment, the patterning comprises subtractive etching using more than one sacrificial masking step within the array, and in one embodiment using no more than two sacrificial masking steps within the array.

Alternately or additionally considered, the above processing is but one example technique of forming pillars 40 extending vertically upward from digit lines 28, wherein pillars 40 individually comprise individual channels 42 and individual upper source/drain regions 44 of individual transistors of individual memory cells within array 14. Alternately or additionally considered, the above processing is but one example technique of using a second sacrificial mask to pattern channel-containing material along a second direction different from the first direction to cut the lines of channel-containing material above the digit lines into spaced-apart individual channels of individual transistors of individual memory cells within the array.

Continuing, and referring to FIGS. 8-10, pillars 40 may be considered to have lateral sides 33, 35, 37, and 39. Individual channels 42 (FIG. 9) may be considered to comprise lateral sides 41, 43, 45, and 47 that are respectively part of pillar sides 33, 35, 37, and 39. Upper source/drain regions 44 (FIG. 10) may be considered to comprise lateral sides 49, 51, 53, and 55 that are also respectively part of pillar sides 33, 35, 37, and 39. Pillars 40 and 46 are shown as having quadrilateral horizontal cross-sections and four straight lateral sides. Alternate shapes may be used, including fewer or more lateral sides that are non-straight and/or curved.

Source/drain regions 44 may be considered to comprise a top 48 (FIG. 7), and pillars 40 may be considered to comprise a top 50 (FIG. 7). In one embodiment, pillars 40 are formed to be conductive from upper source/drain regions 44 to pillar tops 50 (e.g., whereby upper source/drain regions 44 in effect extend vertically upward to pillar tops 50, and there is thereby no pillar-internal top 48). In one embodiment, pillars 40 are formed to be non-conductive from tops 48 to pillar tops 50. In one such embodiment, pillars 40 are formed to be insulative from tops 48 to pillar tops 50, and in another such embodiment are formed to be semiconductive from tops 48 to pillar tops 50.

Access lines are formed laterally across and operatively laterally adjacent lateral sides of the individual transistor channels (e.g., in the depicted embodiment, at least one of channel sides 41, 43, 45, and 47). When so adjacent, such constitutes the portion of an access line that in effect forms the access gate of the individual transistor. The access lines may individually completely encircle (not shown) the respective individual transistor channels, or may be located only over a portion of the periphery of such channels, for example only over opposing lateral sides of the transistor channels.
An example method of forming access lines is described with reference to FIGS. 11-15.

Referring to FIG. 11, a gate insulator 52 (e.g., silicon dioxide, silicon nitride, high-k dielectric, ferroelectric material, etc.) has been formed over first laterally opposed sides 35, 39 of pillars 40, operatively laterally adjacent a pair of first laterally opposed sides 43, 47 of individual channels 42 within array 14, and between laterally-adjacent rows of pillars 40 (e.g., between row-adjacent pillars). Access gate material 54 (e.g., TiN) has been formed over gate insulator 52, including over pillar tops 50, over the first laterally opposed sides 35, 39 of pillars 40 and the pair of first laterally opposed sides 43, 47 of channels 42, and between the laterally-adjacent rows of pillars 40.

Referring to FIG. 12, and in one embodiment, access gate material 54 has been maskless anisotropically etched (i.e., without a mask at least throughout array 14), in one embodiment selectively relative to gate insulator 52, to remove material 54 from above pillar tops 50 and from interconnecting between row-adjacent pillars 40. Gate insulator 52 may also be so removed (not shown) during or after this maskless anisotropic etching of access gate material 54.

Referring to FIG. 13, at least the lower portions of trenches 21 between rows 17 of pillars 40 have been plugged with a sacrificial material 56 (e.g., photoresist). This may be conducted by depositing material 56 followed by a timed etch-back thereof, as shown.

Referring to FIG. 14, access gate material 54 has been removed (e.g., by timed etching) further back, in one embodiment selectively relative to sacrificial material 56 and gate insulator 52, as shown. In the depicted embodiment, this has resulted in access lines 58 being formed laterally across and operatively laterally adjacent the lateral sides of individual transistor channels 42, and thereby of individual transistors 19. Those portions of individual access lines 58 that are so adjacent in effect form the access gates of individual transistors 19. The respective uppermost portions of digit lines 28 located directly below individual channels 42 may serve as individual lower source/drain regions of individual transistors 19. In one embodiment and as shown, individual access lines 58 are in the form of access line pairs 59, 60 that extend laterally across first laterally opposed sides 35, 39 of pillars 40, which are operatively laterally adjacent the first laterally opposed sides 43, 47 of individual channels 42 within array 14. In one embodiment, access line pairs 59, 60 are in respective individual rows 17 and interconnect the transistors in that row. In one embodiment, and in accordance with the processing described above, the maskless anisotropic etching of the access gate material is conducted in at least two time-spaced etching steps (e.g., FIGS. 12 and 14), and in one embodiment the lower portions of trenches 21 between rows 17 of pillars 40 are plugged with sacrificial material 56 during a latter of the time-spaced etching steps (FIG. 14). Example lateral thicknesses for each of access lines 59 and 60 are 30 Angstroms to 75 Angstroms.
FIG. 14 shows only four transistors 19 along the depicted column of array 14, but thousands, tens of thousands, etc. would extend along individual columns and individual rows to produce hundreds of thousands, millions, etc. of transistors within array 14. FIG. 15 shows subsequent removal of sacrificial material 56 (not shown).

In one embodiment, in another sacrificial masking step or otherwise, the respective access line pairs 59 and 60 of individual access lines 58 may be electrically coupled (in one embodiment, directly electrically coupled) relative to one another outside of array 14, within peripheral region 16 (only peripheral region 16 being shown in FIG. 1). In the present invention, regions/materials/components are "electrically coupled" relative to one another if, in normal operation, electric current is capable of continuously flowing from one region/material/component to another, and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions/materials/components. In contrast, when regions/materials/components are referred to as being "directly electrically coupled," no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions/materials/components.

Referring to FIG. 16, a second material 62 has been formed in trenches 21. Material 62 may be of any suitable composition, with at least its lower portion being insulative if material 62 is not wholly sacrificial. If gate insulator 52 remains over the bases of trenches 21 between row-adjacent access lines 58 and/or above access lines 58, that gate insulator 52 effectively becomes part of second material 62, may be of the same or different composition as the originally-formed second material 62, and is shown as being of the same composition in FIG. 16. In one embodiment, first material 34 and second material 62 are formed to be of the same composition as one another, and in another embodiment are formed to be of different compositions from one another.

Capacitors are then formed that individually have one of their capacitor electrodes directly against a lateral side of the upper source/drain region of one of the individual transistors of an individual memory cell within the array. In one embodiment, the one capacitor electrode is individually formed directly against less than the entire lateral side of the respective upper source/drain region, and in one embodiment is formed directly against less than half of the lateral side area of the respective upper source/drain region. In one embodiment, individual ones of the capacitor electrodes are formed directly against a pair of first laterally opposed sides (and in one such embodiment, against no more than two laterally opposed sides) of the respective upper source/drain regions. In one embodiment, individual upper source/drain regions may be considered as having a completely-surrounding peripheral lateral side surface, with individual ones of the capacitor electrodes being directly against all of the completely-surrounding peripheral lateral side surface of the individual upper source/drain regions.
Two example embodiments of forming capacitors are next described with reference to FIGS. 17-24.

Referring to FIG. 17, first material 34 and second material 62 have been sufficiently removed to expose the surrounding peripheral lateral sides 49, 51, 53, and 55 (side 49 not being visible in FIG. 17) of the respective upper source/drain regions 44.

Referring to FIGS. 18 and 19, a layer comprising first capacitor electrode material 63 (e.g., TiN) has been formed over the tops 50 and first laterally opposed sides 35, 39 of pillars 40, directly against a pair of first laterally opposed sides 51, 55 of individual upper source/drain regions 44, and between laterally adjacent pillars 40. In one embodiment and as shown, the layer comprising first capacitor electrode material 63 completely surrounds and is directly against all of the surrounding peripheral lateral sides 49, 51, 53, and 55 of the individual upper source/drain regions 44 within array 14 (side 49 not being visible in FIGS. 18 and 19). An example thickness for material 63 is 25 to 50 Angstroms.

Referring to FIGS. 20 and 21, and in one embodiment, the first capacitor electrode material 63 has been maskless anisotropically etched (i.e., without a mask at least within the entire array 14) to remove the first capacitor electrode material 63 from above the pillar tops 50 and from interconnecting between adjacent pillars 40 in lateral rows. Thus, and in one embodiment, first capacitor electrodes 64 have been formed that completely surround and are directly against all of the surrounding peripheral lateral sides of the individual upper source/drain regions within the array.

Referring to FIG. 22, a capacitor insulator 66 has been formed over first capacitor electrodes 64, and a second capacitor electrode 68 has been formed over capacitor insulator 66 within array 14, thus forming individual capacitors 75 and individual memory cells 85. In one embodiment and as shown, second capacitor electrode 68 is a single electrode shared by the capacitors 75 within array 14. The material of second capacitor electrode 68 may be of the same or different composition as that of first capacitor electrode material 63. In one embodiment and as shown, capacitor insulator 66 completely surrounds individual first capacitor electrodes 64, and second capacitor electrode 68 completely surrounds capacitor insulator 66 about pillars 40 within array 14. Example capacitor insulator materials include SiO2, Si3N4, and/or high-k dielectrics, whereby the capacitor is volatile. Alternatively, in other example embodiments, capacitor insulator 66 comprises programmable material such that the capacitor is formed to be non-volatile and programmable into at least two different magnitude capacitance states (e.g., with the programmable material being sufficiently thick and remaining insulative in its different states so that current sufficient to erase the memory state does not flow through the programmable material at operating voltage). Such example programmable materials include ferroelectric materials, conductive bridging RAM (CBRAM) materials, phase change materials, and resistive RAM (RRAM) materials, with ferroelectrics being considered ideal. Example ferroelectric materials include ferroelectrics having one or more of transition metal oxide, zirconium, zirconium oxide, niobium, niobium oxide, hafnium, hafnium oxide, lead zirconium titanate, and barium strontium titanate, and which may have dopant therein comprising one or more of silicon, aluminum, lanthanum, yttrium, erbium, calcium, magnesium, strontium, and a rare-earth element.
In one embodiment, capacitor insulator 66 comprises dielectric material such that the capacitor is volatile. For example, such may comprise one or more non-programmable dielectric materials (e.g., silicon dioxide, silicon nitride, aluminum oxide, high-k dielectrics, etc.), whereby no charge remains in material 66 after voltage/potential is removed from, or sufficiently reduced on, one or both of the two capacitor electrodes of the capacitor. A non-volatile programmable capacitor may have a capacitor insulator comprising a suitable combination of programmable material(s) and non-programmable material(s). Regardless, an example thickness for capacitor insulator 66 is 30 Angstroms to 100 Angstroms.

Any material that can be doped with a suitable conductivity-increasing dopant to render it of desired conductivity, such as materials 20 and 22, may be so doped as-deposited and/or subsequently. Any other attribute(s) or aspect(s) as described and/or shown herein may be used in the embodiments described above with reference to FIGS. 1-22.

FIGS. 17-22 depict an embodiment in which the first capacitor electrodes 64 have been formed with their respective tops 65 planar and vertically coincident with the respective planar tops 50 of the pillars 40 that they surround. Alternatively, and by way of example, the first capacitor electrodes may be formed with their respective tops not vertically coincident with the tops of the respective pillars they surround, for example as described next with reference to an alternative-embodiment construction 10a shown in FIGS. 23 and 24. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "a" or with different numerals. Construction 10a in FIG. 23 shows processing immediately subsequent to that of FIGS. 20 and 21, and occurring in place of that shown by FIG. 22. Specifically, after forming first capacitor electrodes 64, material 22 of pillars 40 has been removed selectively relative to first capacitor electrodes 64 before forming the capacitor insulator. FIG. 23 shows material 22 as having been removed back to the tops 48 of upper source/drain regions 44. Alternatively, and in one embodiment, some upper material of upper source/drain regions 44 may be removed (not shown) as long as some of their lateral side surfaces remain laterally against the respective first capacitor electrodes 64. Alternatively, material 22 of pillars 40 may not be removed all the way down to tops 48 of source/drain regions 44 (not shown). In one embodiment in which material 22 of pillars 40 is selectively removed, such removing removes more than half of the pillar material 20/22 before forming the capacitor insulator.

Regardless, FIG. 24 shows subsequent processing in which a capacitor insulator 66a and a second capacitor electrode 68a have been formed laterally over majorities of the radially inner and radially outer sides of first capacitor electrodes 64, thus forming capacitors 75a and memory cells 85a. FIG. 24 shows an example embodiment in which the first capacitor electrodes 64 have been formed with their tops 65 above the tops 50 of the pillars 40 that they surround. In one embodiment and as shown, capacitor insulator 66a is formed directly against the tops 48 of the surrounded individual upper source/drain regions 44. In one embodiment and as shown, capacitor insulator 66a is formed laterally over all of the radially outer sides of individual first capacitor electrodes 64 and laterally over only some of the radially inner sides of individual first capacitor electrodes 64.
In one embodiment and as shown, FIGS. 23 and 24 may be viewed as forming the first capacitor electrode 64 in the form of a cylinder having a radially inner side and a radially outer side. The capacitor insulator 66a is over the radially outer and radially inner sides of the first capacitor electrode cylinder 64, and the second capacitor electrode 68a is over the capacitor insulator 66a and over the radially outer and radially inner sides of the first capacitor electrode cylinder 64. Any other attribute(s) or aspect(s) as described and/or shown herein may be used.

The embodiments described above with respect to FIGS. 1-22 are example method embodiments that are devoid of etching material of the pillars after forming the first capacitor electrodes, whereas the embodiments described above with respect to FIGS. 23 and 24 etch material of the pillars after forming the first capacitor electrodes.

Additional example embodiments are next described with respect to a construction 10b as shown in FIGS. 25-35. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "b" or with different numerals. FIG. 25 is similar to FIG. 16 as described above. In FIG. 16, the first material 34 and the second material 62 may be of the same or different compositions, with FIG. 16 showing the same composition by the dashed interface between first material 34 and second material 62. In FIG. 25, first material 34 and second material 62 are shown as being of different compositions by the solid-line interface therebetween. Regardless, in the embodiment of FIGS. 25-35, a method of forming an array of memory cells individually comprising a transistor and a capacitor includes forming alternating first and second vertically extending pillars 40 and 46, respectively. First pillars 40 extend vertically upward from digit lines 28 and individually comprise individual channels 42 and individual upper source/drain regions 44 of individual transistors of individual memory cells within array 14. Gate insulator 52 and access lines 58 are formed laterally across lateral sides of the first and second pillars 40 and 46, operatively laterally adjacent the lateral sides of the respective transistor channels 42. In one embodiment and as shown, access lines 58 may be in the form of access line pairs 59, 60 that are operatively laterally adjacent a pair of first laterally opposed sides 43, 47 of the channels 42 of individual first pillars 40 within array 14.

Referring to FIG. 26, second material 62 has been etched back (e.g., by a timed etch) selectively relative to pillar material 22 and first material 34.

Referring to FIG. 27, a first capacitor electrode material 63 has been deposited and then, in one embodiment, maskless anisotropically etched (i.e., without a mask at least within the entire array 14) to remove the first capacitor electrode material 63 from above the pillar tops 50 and from laterally between the first pillars 40 and the second pillars 46. This is shown as forming first capacitor electrode line pairs 67 extending laterally across the first and second pillars 40, 46.
The first capacitor electrode line pairs 67 are directly against the first laterally opposed sides 51, 55 of the respective upper source/drain regions 44 of the respective first pillars 40 within array 14.

Referring to FIGS. 28 and 29 (a top view), a capacitor insulator 66 has been formed over first capacitor electrode line pairs 67, and second capacitor electrode material 68 has been formed over capacitor insulator 66 within array 14.

Referring to FIGS. 30 and 31 (top views), materials 68 and 66 have been planarized back sufficiently to upwardly expose first material 34 of pillars 46 (FIG. 31).

Referring to FIG. 32, which is a sectional view of the FIG. 31 construction at a processing step subsequent to that shown by FIG. 31 and taken horizontally through the uppermost portions of upper source/drain regions 44, material 34 (not shown) of the second pillars 46 (not shown) has been removed from the lateral sides of first capacitor electrode line pairs 67. In one embodiment and as shown, all of the material of the second pillars has been removed down to substrate 12.

Referring to FIG. 33, the lateral sides (not shown) of first capacitor electrode line pairs 67 have been cut laterally through (e.g., by isotropic and/or anisotropic etching) to form first capacitor electrodes 64 that are individually directly against the first laterally opposed sides 51, 55 of the respective upper source/drain regions 44 within array 14.

Referring to FIG. 34, a suitable dielectric material (e.g., material 62 as shown) has been deposited and planarized back to fill between pillars 40 (e.g., filling intra-row between rows 17), thus effectively re-forming pillars 46 (not designated in FIG. 34).

Referring to FIG. 35, upper portions of the first capacitor electrode material 63 have been recessed/etched back (or oxidized), and an insulator material 70 (e.g., silicon dioxide or silicon nitride) has been formed thereover. Alternatively, and by way of example, material 63 could be recessed before dielectric material 62 is deposited in FIG. 34, with such dielectric material 62 then filling such recesses (not shown). Additional second capacitor electrode material 68 has been deposited, thus forming capacitors 75 and memory cells 85. Any other attribute(s) or aspect(s) as described and/or shown herein may be used.

The embodiments described above with respect to FIGS. 25-35 provide the capacitor insulator over the first capacitor electrodes and the second capacitor electrode over the capacitor insulator prior to the act of cutting laterally through the lateral sides of the first capacitor electrode line pairs. Alternatively, the capacitor insulator and second capacitor electrode material may be provided after such cutting, for example as shown and described with respect to a construction 10c in FIGS. 36-42. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "c" or with different numerals.

Referring to FIGS. 36 and 37, construction 10c corresponds to processing of the FIG. 27 substrate to produce a construction that is different from that shown by FIGS. 28 and 29. Specifically, in FIGS. 36 and 37, dielectric material 62 has been deposited and planarized back to fill trenches 21.
Then, material 34 (not shown) of second pillars 46 (not shown) has been removed from the lateral sides of first capacitor electrode line pairs 67 (e.g., instead of, and before, depositing capacitor materials 66 and 68).

Referring to FIG. 38, the lateral sides of the first capacitor electrode line pairs 67 (not shown) have been cut laterally through to form first capacitor electrodes 64.

Referring to FIG. 39, pillars 46 (not designated in FIG. 34) have been effectively re-formed by depositing and planarizing back a dielectric material, for example and as shown dielectric material 34 (e.g., silicon nitride).

Referring to FIG. 40, second material 62 has been etched back selectively relative to the other exposed materials.

Referring to FIGS. 41 and 42, a capacitor insulator 66 and a second capacitor electrode 68 have then been formed. Any other attribute(s) or aspect(s) as described and/or shown herein may be used.

The above processing and figures show fabrication of one layer/level/stratum of, for example, an array of memory cells. One or more additional such layers/levels/strata may be provided or fabricated above and/or below the one depicted in the figures. Alternatively, only a single such layer/level/stratum may be fabricated. Regardless, embodiments of the invention encompass a method of forming a layer of an array of memory cells within an array area, where the memory cells individually comprise a transistor and a capacitor, and where the method comprises forming the memory cells within the array area of the layer using two and only two sacrificial masking steps. Each of the above-described embodiments is but one example of such a method. Specifically, and by way of example, FIGS. 3-5 show one example such sacrificial masking step (e.g., extending at least through removal of an upper portion of material 25 in FIG. 5), and the processing shown and described above with respect to FIGS. 6 and 7 is another example sacrificial masking step. In the above-described example embodiments, and in accordance with one embodiment, no other sacrificial masking step is used within the array area 14 of the depicted layer within which the individual memory cells are formed. Such can be facilitated by forming circuit components in a self-aligned manner. In the present invention, "self-aligned" means a technique in which at least a lateral surface of a structure is defined by deposition of material against a sidewall of a previously-patterned structure.

In one embodiment, in each of the two sacrificial masking steps, exposed channel-containing material of the transistors within the array area is subtractively etched while unexposed portions of this channel-containing material within the array area, over which the sacrificial masking material is at least partially (e.g., completely) received, are thereby masked. In one such embodiment, material is subtractively etched within the array area in each of the two sacrificial masking steps, but neither masking step comprises subtractively etching material of the capacitors within the array area. For example, neither of the masking steps shown by FIGS. 3-5 or 6-7 etches material of the capacitors within array area 14, at least because such has not yet been formed. In one embodiment, a sequentially-first of the two masking steps comprises using sacrificial masking material to mask the digit line material below it while subtractively etching exposed digit line material, to form the digit lines below where the transistors and capacitors are ultimately formed within the array area.
In one embodiment, during such etching, gate material of the transistors is etched to form the transistor gates, and all material of the capacitors within the array area is etched to form the capacitors, without any masking material being formed over the capacitor material during such etching.

In one embodiment, a method in accordance with the present invention includes forming the individual memory cells to be 1T-1C. Such individual memory cells may be characterized by having only one transistor and only one capacitor and no other/additional operable electronic component (e.g., no other selection device, etc.), and may also include conductive material that interconnects the transistor and the capacitor together and that interconnects the individual memory cell with other components outside of the individual memory cell.

Embodiments of the present invention also encompass forming the individual memory cells to be 2T-2C. Such memory cells are characterized by having only two transistors and only two capacitors and no other operable electronic component (e.g., no other selection device, etc.), and may also include conductive material that interconnects the two transistors and two capacitors together and that interconnects the individual memory cell with other components outside of the individual memory cell. In FIG. 43, a 2T-2C memory cell architecture is shown schematically as memory cell 2. The two transistors of the memory cell are labeled T1 and T2, and the two capacitors are labeled CAP-1 and CAP-2. A source/drain region of the first transistor T1 connects with a node of the first capacitor CAP-1, and the other source/drain region of T1 connects with a first comparative bit line BL-1. A gate of T1 connects with a word line WL. A source/drain region of the second transistor T2 connects with a node of the second capacitor CAP-2, and the other source/drain region of T2 connects with a second comparative bit line BL-2. A gate of T2 connects with the word line WL. Each of the first and second capacitors CAP-1 and CAP-2 has a node electrically coupled with a common plate CP, and the common plate may be coupled with any suitable voltage. The comparative bit lines BL-1 and BL-2 extend to a circuit 4 that compares electrical properties (e.g., voltages) of the two comparative bit lines to ascertain the memory state of memory cell 2. An advantage of a 2T-2C memory cell is that the memory state may be ascertained by comparing the electrical properties of the two comparative bit lines BL-1 and BL-2 with one another, whereby a reference bit line of the type associated with other memory (e.g., 1T-1C memory) may be omitted. In this embodiment, BL-1 and BL-2 may be electrically coupled to the same sense amplifier as part of circuit 4.
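Purely as a behavioral illustration of the differential read just described, the following is a minimal sketch (in Python; the Cell2T2C class and its method names are hypothetical and are not part of this disclosure) of how a memory state can be ascertained by comparing BL-1 and BL-2 against one another rather than against a separate reference bit line.

```python
# Minimal behavioral sketch of 2T-2C differential sensing (illustrative only;
# Cell2T2C and its methods are hypothetical names, not from this disclosure).

class Cell2T2C:
    """Models a 2T-2C cell: complementary charge on CAP-1 and CAP-2,
    sensed by comparing the two comparative bit lines BL-1 and BL-2."""

    def __init__(self):
        self.cap1 = 0.0  # normalized charge on CAP-1
        self.cap2 = 0.0  # normalized charge on CAP-2

    def write(self, bit: bool) -> None:
        # Complementary storage: one capacitor charged, the other discharged.
        self.cap1, self.cap2 = (1.0, 0.0) if bit else (0.0, 1.0)

    def read(self) -> bool:
        # Asserting WL couples CAP-1 to BL-1 (via T1) and CAP-2 to BL-2 (via T2).
        bl1, bl2 = self.cap1, self.cap2
        # The sense circuit (circuit 4) compares the two bit lines directly,
        # so no separate reference bit line is needed.
        return bl1 > bl2

cell = Cell2T2C()
cell.write(True)
assert cell.read() is True
```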
An alternative-embodiment construction to that shown by FIG. 22 is shown in FIG. 44, which may comprise a 2T-2C architecture like that shown in FIG. 43. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix "d". Construction 10d comprises individual memory cells 85d of a 2T-2C architecture, and which may be volatile or non-volatile depending upon the composition of the capacitor insulator. Laterally-adjacent pairs of transistors 19 are shown as having their respective gates directly electrically coupled together to comprise one 2T-2C memory cell 85d of the array. Such is shown schematically in FIG. 44 by conductive interconnects 79 extending to nodes 80 for the two such individual pairs depicted. Interconnects 79 and nodes 80 may be configured (not shown) within and/or out of the plane of the page of FIG. 44, and within and/or outside of array 14. Digit lines 28d (or extensions thereof) are re-designated as BL-1 and BL-2, as shown and in accordance with the FIG. 43 schematic. Any other attribute(s) or aspect(s) as described and/or shown herein may be used.

Embodiments of the invention encompass an array (e.g., 14) of memory cells (e.g., 85, 85a, 85d) independent of method of manufacture. Nevertheless, such an array, independent of method of manufacture, may have any of the attribute(s) or aspect(s) as described and/or shown above. Such an array comprises rows (e.g., 17) of access lines (e.g., 58) and columns (e.g., 15) of digit lines. Individual ones of the columns comprise a digit line (e.g., 28) that is below the channels (e.g., 42) of the vertically extending transistors (e.g., 19) of individual memory cells within the array and that interconnects the transistors in the column. Individual ones of the rows comprise an access line above the digit lines, where the access lines extend laterally across lateral sides (e.g., 41, 43, 45, and/or 47) of the transistor channels, are operatively laterally adjacent those lateral sides, and interconnect the transistors in the row. In one embodiment, such access lines comprise access line pairs (e.g., 59, 60) that extend laterally across, and are operatively laterally adjacent, a pair of first laterally opposed sides (e.g., 43, 47) of the transistor channels and that interconnect the transistors in the row.

The capacitors (e.g., 75, 75a) of individual memory cells within the array individually comprise a first capacitor electrode (e.g., 64) directly against a lateral side (e.g., 49, 51, 53, and/or 55) of an upper source/drain region (e.g., 44) of an individual one of the transistors within the array. In one such embodiment, the first capacitor electrodes are directly against a pair of first laterally opposed sides (e.g., 51, 55 and/or 49, 53) of the upper source/drain regions of individual transistors within the array. In one such embodiment, the first capacitor electrodes are directly against no more than two laterally opposed sides of the upper source/drain regions of individual transistors within the array. In another embodiment, the individual upper source/drain regions have a completely-surrounding peripheral lateral side surface (e.g., 49, 51, 53, 55), with the individual first capacitor electrodes being directly against all of the completely-surrounding peripheral lateral side surface of the individual upper source/drain regions. A capacitor insulator (e.g., 66) is above the first capacitor electrode, and a second capacitor electrode (e.g., 68) is above the capacitor insulator. In one embodiment, the capacitor insulator comprises ferroelectric material.

In one embodiment, the individual memory cells comprise a pillar (e.g., 40) extending vertically above the digit line, the pillar comprising one of the transistor channels and the upper source/drain region of an individual one of the transistors. In one embodiment, such pillar has a vertical thickness that is at least three times the vertical thickness of the one transistor channel. In one embodiment, the pillar is formed to be conductive from the upper source/drain region to the top (e.g., 50) of the pillar. In one embodiment, the pillar is formed to be non-conductive from the top (e.g., 48) of the upper source/drain region to the top of the pillar.
In one such embodiment, the pillar is formed to be insulative from the top of the upper source/drain region to the top of the pillar, and in another embodiment is formed to be semiconductive from the top of the upper source/drain region to the top of the pillar.

In one embodiment, the first capacitor electrode has a top (e.g., 65) that is planar and vertically coincident with a planar top of its pillar. In another embodiment, the first capacitor electrode has a top that is not vertically coincident with the top of its pillar. In one embodiment, the first capacitor electrode top is planar. In one embodiment, the first capacitor electrode top is vertically outward of the top of its pillar. In one embodiment, the first capacitor electrode is directly against no more than two laterally opposed sides of the upper source/drain region of the respective individual transistor within the array. In one embodiment, the individual upper source/drain regions have a completely-surrounding peripheral lateral side surface, with the individual first capacitor electrodes being directly against all of the completely-surrounding peripheral lateral side surface of the individual upper source/drain regions.

In one embodiment, the capacitors of individual memory cells within the array individually comprise an upwardly-open and downwardly-open first capacitor electrode cylinder (e.g., 64) that completely surrounds and is directly against all peripheral lateral sides of the upper source/drain region of an individual one of the transistors within the array. In one such embodiment, the capacitor insulator and the second capacitor electrode are over majorities of the radially inner side and the radially outer side of the first capacitor electrode cylinder. In one embodiment, the capacitor insulator is laterally over all of the radially outer side of the first capacitor electrode cylinder and laterally over only some of the radially inner side of the first capacitor electrode cylinder. In one embodiment, the capacitor insulator is directly against a top of the surrounded individual upper source/drain region. In one embodiment, the capacitor insulator is directly against a top of the first capacitor electrode cylinder. In one embodiment, the capacitor insulator is directly against a top of the surrounded individual upper source/drain region and directly against a top of the first capacitor electrode cylinder.

In one embodiment, the individual memory cells are 1T-1C, and in another embodiment are 2T-2C. Regardless, individual memory cells may have any existing or future-developed circuit schematic comprising at least one transistor and at least one capacitor.

In conclusion, in some embodiments, a method of forming a layer of an array of memory cells within an array area, where the memory cells individually comprise a capacitor and a vertically extending transistor, comprises forming the memory cells within the array area of the layer using two and only two sacrificial masking steps.

In some embodiments, a method of forming an array of memory cells that individually comprise a capacitor and a vertically extending transistor comprises using a first sacrificial mask to pattern digit line material and channel-containing material thereover in a first direction, to form digit lines within the array such that lines of the channel-containing material are above the digit lines.
The channel-containing material is patterned in a second direction different from the first direction using a second sacrificial mask, to cut the lines of channel-containing material above the digit lines into spaced-apart individual channels of individual transistors of individual memory cells within the array. Gate insulators and access lines are formed laterally across and operatively laterally adjacent lateral sides of the individual transistor channels. Capacitors are formed that individually have one of their capacitor electrodes directly against a lateral side of an upper source/drain region of one of the individual transistors of the individual memory cells within the array.

In some embodiments, a method of forming a layer of an array of memory cells that individually comprise a capacitor and a vertically extending transistor comprises forming digit line material above a substrate, forming channel-containing material above the digit line material, and forming source/drain-containing material above the channel-containing material. The digit line material, the channel-containing material, and the source/drain-containing material are patterned to form digit lines within the array and to form vertically extending pillars, the pillars comprising individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array. A gate insulator and an access line are formed laterally across and operatively laterally adjacent the lateral sides of the individual transistor channels. A first capacitor electrode is formed directly against a pair of first laterally opposed sides of the individual upper source/drain regions of the pillars within the array. A capacitor insulator is formed above the first capacitor electrode, and a second capacitor electrode is formed above the capacitor insulator within the array.

In some embodiments, a method of forming an array of memory cells that individually comprise a capacitor and a vertically extending transistor comprises forming pillars extending vertically upward from a digit line, the pillars individually comprising individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array. A gate insulator and an access line are formed laterally across and operatively laterally adjacent the lateral sides of the individual transistor channels. First capacitor electrodes are formed that completely surround and are directly against all peripheral lateral sides of the individual upper source/drain regions within the array. A capacitor insulator is formed over and completely surrounding individual ones of the first capacitor electrodes, and a second capacitor electrode is formed over and completely surrounding the surrounding capacitor insulator within the array.

In some embodiments, a method of forming an array of memory cells that individually comprise a capacitor and a vertically extending transistor comprises forming digit line material above a substrate, forming channel-containing material above the digit line material, and forming source/drain-containing material above the channel-containing material. The digit line material, the channel-containing material, and the source/drain-containing material are patterned in a first direction to form digit lines within the array such that lines of the channel-containing material and lines of the source/drain-containing material are above the digit lines.
A first material is formed in trenches laterally between the digit lines and the lines thereover within the array. The channel-containing material, the source/drain-containing material, and the first material are patterned in a second direction different from the first direction, to form vertically extending pillars that comprise individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array, and such that the first material is laterally between the pillars. A gate insulator and an access line pair are formed laterally across a pair of first laterally opposed sides of the pillars, the pair of first laterally opposed sides being operatively laterally adjacent a pair of first laterally opposed sides of the individual channels within the array. A second material is formed in trenches laterally between the pillars and the first material within the array. The first material and the second material are sufficiently removed to expose surrounding peripheral lateral sides of the individual upper source/drain regions. First capacitor electrodes are formed that completely surround and are directly against all of the surrounding peripheral lateral sides of the individual upper source/drain regions within the array. Capacitor insulators are formed over and completely surrounding individual first ones of the first capacitor electrodes, and second capacitor electrodes are formed over and completely surrounding the surrounding capacitor insulators within the array.

In some embodiments, a method of forming an array of memory cells that individually comprise a capacitor and a vertically extending transistor comprises forming alternating first and second vertically extending pillars, the first pillars extending vertically upward from digit lines and individually comprising individual channels and individual upper source/drain regions of individual transistors of individual memory cells within the array. Gate insulators and access lines are formed laterally across and operatively laterally adjacent lateral sides of the individual transistor channels. First capacitor electrode line pairs are formed laterally across the first and second pillars, the first capacitor electrode line pairs being directly against a pair of first laterally opposed sides of the respective upper source/drain regions of the respective first pillars within the array. Material of the second pillars is removed from the lateral sides of the first capacitor electrode line pairs, and the lateral sides of the first capacitor electrode line pairs are then cut laterally through to form first capacitor electrodes that are individually directly against the first laterally opposed sides of the respective upper source/drain regions within the array. A capacitor insulator is provided above the first capacitor electrodes, and a second capacitor electrode is provided above the capacitor insulator within the array.

In some embodiments, an array of memory cells individually comprising a capacitor and a vertically extending transistor comprises rows of access lines and columns of digit lines, individual ones of the columns comprising a digit line that is below the channels of the vertically extending transistors of individual memory cells within the array and that interconnects the transistors in the column. Individual ones of the rows comprise an access line above the digit lines.
The access lines extend laterally across, and operatively laterally adjacent, lateral sides of the transistor channels and interconnect the transistors in the row. The capacitors of the individual memory cells within the array individually comprise a first capacitor electrode directly against a lateral side of an upper source/drain region of an individual one of the transistors within the array. A capacitor insulator is above the first capacitor electrode. A second capacitor electrode is above the capacitor insulator.

In some embodiments, an array of memory cells individually comprising a capacitor and a vertically extending transistor comprises rows of access lines and columns of digit lines, individual ones of the columns comprising a digit line that is below the channels of the vertically extending transistors of individual memory cells within the array and that interconnects the transistors in the column. Individual ones of the rows comprise an access line above the digit lines. The access lines extend laterally across, and operatively laterally adjacent, lateral sides of the transistor channels and interconnect the transistors in the row. The capacitors of the individual memory cells within the array individually comprise a first capacitor electrode directly against a pair of first laterally opposed sides of an upper source/drain region of an individual one of the transistors within the array. A capacitor insulator is above the first capacitor electrode. A second capacitor electrode is above the capacitor insulator.

In some embodiments, an array of memory cells individually comprising a capacitor and a vertically extending transistor comprises rows of access lines and columns of digit lines, individual ones of the columns comprising a digit line that is below the channels of the vertically extending transistors of individual memory cells within the array and that interconnects the transistors in the column. Individual ones of the rows comprise an access line above the digit lines. The access lines extend laterally across, and operatively laterally adjacent, lateral sides of the transistor channels and interconnect the transistors in the row. The individual memory cells comprise a pillar extending vertically above the digit line, the pillar comprising one of the transistor channels and the upper source/drain region of an individual one of the transistors, and the pillar having a vertical thickness that is at least three times the vertical thickness of the one transistor channel. The capacitors of the individual memory cells within the array individually comprise a first capacitor electrode directly against a pair of first laterally opposed sides of the pillar and of the upper source/drain region of the respective individual transistor within the array. A capacitor insulator is above the first capacitor electrode. A second capacitor electrode is above the capacitor insulator.

In some embodiments, an array of memory cells individually comprising a capacitor and a vertically extending transistor comprises rows of access lines and columns of digit lines, individual ones of the columns comprising a digit line that is below the channels of the vertically extending transistors of individual memory cells within the array and that interconnects the transistors in the column.
Individual ones of the rows comprise an access line above the digit lines. The access lines extend laterally across, and operatively laterally adjacent, lateral sides of the transistor channels and interconnect the transistors in the row. The capacitors of the individual memory cells within the array individually comprise an upwardly-open and downwardly-open first capacitor electrode cylinder that completely surrounds and is directly against all peripheral lateral sides of an upper source/drain region of an individual one of the transistors within the array. A capacitor insulator is above the radially outer and radially inner sides of the first capacitor electrode cylinder. A second capacitor electrode is above the capacitor insulator and above the radially outer and radially inner sides of the first capacitor electrode cylinder. |
To provide methods, apparatus, systems, and articles of manufacture for configuring heterogeneous components in an accelerator. SOLUTION: An example apparatus includes a graph compiler to identify a workload node in a workload and generate a selector for the workload node, and the selector to identify an input condition and an output condition of a compute building block, where the graph compiler is to, in response to obtaining the identified input condition and output condition from the selector, map the workload node to the compute building block. SELECTED DRAWING: Figure 2 |
1. An apparatus for configuring heterogeneous components in an accelerator, the apparatus comprising: a graph compiler to identify a workload node in a workload and generate a selector for the workload node; and the selector to identify an input condition and an output condition of a compute building block, the graph compiler to map the workload node to the compute building block in response to obtaining the identified input condition and output condition from the selector.
2. The apparatus according to claim 1, wherein the graph compiler is to identify a second workload node in the workload and generate a second selector for the second workload node.
3. The apparatus according to claim 2, wherein the second selector is to identify a second input condition and a second output condition of a kernel.
4. The apparatus according to any one of claims 1 to 3, wherein the workload is a graph including the workload node obtained by the graph compiler.
5. The apparatus according to any one of claims 1 to 3, wherein the input condition corresponds to an input requirement of the compute building block, and the output condition corresponds to a result of execution of the compute building block.
6. The apparatus according to any one of claims 1 to 3, wherein the graph compiler is to generate an executable file in response to mapping the workload node to the compute building block.
7. The apparatus according to any one of claims 1 to 3, wherein the graph compiler further includes a plugin to form a conversion layer between the workload node and the compute building block based on the identified input condition and output condition, to allow the mapping of the workload node to the compute building block.
8. A program comprising instructions that, when executed, cause at least one processor to at least: identify a workload node in a workload; generate, for the workload node, a selector associated with a compute building block to execute the workload node; identify an input condition and an output condition of the compute building block; and map the workload node to the compute building block in response to obtaining the identified input condition and output condition.
9. The program according to claim 8, wherein the instructions, when executed, further cause the at least one processor to identify a second workload node in the workload and generate a second selector for the second workload node.
10. The program according to claim 8 or 9, wherein the instructions, when executed, further cause the at least one processor to identify a second input condition and a second output condition of a kernel.
11. The program according to any one of claims 8 to 10, wherein the workload is a graph including the workload node.
12. The program according to any one of claims 8 to 10, wherein the input condition corresponds to an input requirement of the compute building block, and the output condition corresponds to a result of execution of the compute building block.
13. The program according to any one of claims 8 to 10, wherein the instructions, when executed, further cause the at least one processor to generate an executable file in response to mapping the workload node to the compute building block.
14. The program according to any one of claims 8 to 10, wherein the instructions, when executed, cause the at least one processor to form a conversion layer between the workload node and the compute building block based on the identified input condition and output condition, to allow the mapping of the workload node to the compute building block.
15. An apparatus comprising: compiling means to identify a workload node in a workload and generate a selecting means associated with a compute building block for executing the workload node; and the selecting means to identify an input condition and an output condition of the compute building block, the apparatus to map the workload node to the compute building block in response to obtaining the identified input condition and output condition.
16. The apparatus of claim 15, wherein the compiling means is further to identify a second workload node in the workload and generate a second selecting means for the second workload node.
17. The apparatus of claim 16, wherein the second selecting means is further to identify a second input condition and a second output condition of a kernel.
18. The apparatus according to any one of claims 15 to 17, wherein the workload is a graph including the workload node.
19. The apparatus according to any one of claims 15 to 17, wherein the input condition corresponds to an input requirement of the compute building block, and the output condition corresponds to a result of execution of the compute building block.
20. A method of configuring heterogeneous components in an accelerator, the method comprising: identifying a workload node in a workload; generating, for the workload node, a selector associated with a compute building block to execute the workload node; identifying an input condition and an output condition of the compute building block; and mapping the workload node to the compute building block in response to obtaining the identified input condition and output condition.
21. The method of claim 20, further comprising identifying a second workload node in the workload and generating a second selector for the second workload node.
22. The method of claim 20 or 21, further comprising identifying a second input condition and a second output condition of a kernel.
23. The method according to any one of claims 20 to 22, wherein the workload is a graph including the workload node.
24. The method according to any one of claims 20 to 22, further comprising generating an executable file in response to mapping the workload node to the compute building block.
25. The method according to any one of claims 20 to 22, further comprising forming a conversion layer between the workload node and the compute building block based on the identified input condition and output condition, to allow the mapping of the workload node to the compute building block.
26. At least one non-transitory computer-readable storage medium storing the program according to any one of claims 8 to 14. |
Methods and apparatus for configuring heterogeneous components in an accelerator

The present application relates generally to processing and, more specifically, to methods and apparatus for configuring heterogeneous components in an accelerator.

Computer hardware manufacturers develop hardware components for use in the various components of computer platforms. For example, computer hardware manufacturers develop motherboards, chipsets for motherboards, central processing units (CPUs), hard disk drives (HDDs), solid state drives (SSDs), and other computer components. Moreover, computer hardware manufacturers develop processing elements, known as accelerators, to accelerate the processing of a workload. For example, an accelerator can be a CPU, a graphics processing unit (GPU), a vision processing unit (VPU), and/or a field programmable gate array (FPGA).

FIG. 1 is a block diagram of an example computer system that configures heterogeneous components in an accelerator. FIG. 2 is a block diagram of an example computing system including an example graph compiler and example one or more selectors. FIG. 3 is a block diagram of an example selector of the one or more selectors of FIG. 2. FIG. 4 is a block diagram of the example graph compiler of FIG. 2. FIG. 5 is a graphical illustration of an example pipeline representing a workload executed using an example first CBB and an example second CBB. FIG. 6 is a flowchart representative of a process which may be executed to implement the graph compiler, selector, and/or one or more selectors of FIGS. 2, 3, and/or 4 to generate the executable file of FIG. 2. FIG. 7 is a flowchart representative of a process which may be executed to implement the credit manager and/or configuration controller of FIG. 2 to facilitate execution of the executable file of FIG. 2. FIG. 8 is a block diagram of an example processor platform structured to execute the instructions of FIGS. 6 and/or 7 to implement the example graph compiler, example one or more selectors, example selector, and/or accelerator of FIGS. 2, 3, and/or 4.

The figures are not to scale. In general, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. Connection references (e.g., attached, coupled, connected, and joined) are to be construed broadly and, unless otherwise indicated, may include intermediate members between a collection of elements and relative movement between elements. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other.

Descriptors "first," "second," "third," etc. are used herein when identifying multiple elements or components that may be referred to separately. Unless otherwise specified or understood based on their context of use, such descriptors are not intended to impute any meaning of priority, physical order or arrangement in a list, or ordering in time, but are merely used as labels for referring to multiple elements or components separately for ease of understanding the disclosed examples. In some examples, the descriptor "first" may be used to refer to an element in the detailed description, while the same element may be referred to in a claim with a different descriptor, such as "second" or "third."
In such cases, it should be understood that such descriptors are used merely for ease of referencing multiple elements or components.

Many computer hardware manufacturers develop processing elements, known as accelerators, to accelerate the processing of a workload. For example, an accelerator can be a CPU, a GPU, a VPU, and/or an FPGA. Moreover, while accelerators are capable of processing any type of workload, they are designed to optimize particular types of workloads. For example, while CPUs and FPGAs can be designed for more general processing, GPUs can be designed to improve the processing of video, games, and/or other physics- and mathematics-based computations, and VPUs can be designed to improve the processing of machine vision tasks.

Additionally, some accelerators are designed specifically to improve the processing of artificial intelligence (AI) applications. While many different types of AI accelerators are available, a VPU is one particular type of AI accelerator. In fact, many AI accelerators can be implemented by application specific integrated circuits (ASICs). Such ASIC-based AI accelerators can be designed to improve the processing of tasks related to particular types of AI, such as machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, including support vector machines (SVMs), neural networks (NNs), recurrent neural networks (RNNs), convolutional neural networks (CNNs), long short term memory (LSTM), gate recurrent units (GRUs), etc.

Computer hardware manufacturers also develop heterogeneous systems that include more than one type of processing element. For example, computer hardware manufacturers may combine general purpose processing elements, such as CPUs, with general purpose accelerators, such as FPGAs, and/or with more tailored accelerators, such as GPUs, VPUs, and/or other AI accelerators. Such heterogeneous systems can be implemented as systems on a chip (SoCs).

When a developer desires to execute a function, algorithm, program, application, and/or other code on a heterogeneous system, the developer and/or software generates a schedule (e.g., a graph) for the function, algorithm, program, application, and/or other code at compile time. Once a schedule is generated, the schedule is combined with the function, algorithm, program, application, and/or other code specification to generate an executable file (for either an Ahead-of-Time or Just-in-Time paradigm). Moreover, the schedule combined with the function, algorithm, program, application, and/or other code may be represented as a graph including nodes, where the graph represents a workload and each node (e.g., workload node) represents a particular task of that workload. Furthermore, the connections between the different nodes in the graph represent the data inputs and/or outputs needed for a particular workload node to be executed, and the edges of the graph represent the data dependencies between the workload nodes of the graph.

A common implementation for compiling a schedule (e.g., a graph) includes a graph compiler that receives the schedule (e.g., a graph) and assigns different workload nodes of the workload to different compute building blocks (CBBs) in an accelerator. In a heterogeneous system, the graph compiler is individually configured to communicate with each independent CBB.
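As a concrete illustration of the graph representation described above, the following is a minimal sketch in Python (the class names, such as WorkloadNode and WorkloadGraph, are illustrative assumptions and are not identifiers used by this disclosure): nodes represent tasks of the workload, and edges record the data dependencies between them.

```python
# Minimal sketch of a workload represented as a graph. All names here are
# illustrative assumptions and not this disclosure's identifiers.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class WorkloadNode:
    name: str                                         # a task of the workload
    inputs: List[str] = field(default_factory=list)   # data this task reads
    outputs: List[str] = field(default_factory=list)  # data this task writes

@dataclass
class WorkloadGraph:
    nodes: Dict[str, WorkloadNode] = field(default_factory=dict)
    edges: List[Tuple[str, str]] = field(default_factory=list)  # dependencies

    def add_node(self, node: WorkloadNode) -> None:
        self.nodes[node.name] = node

    def add_edge(self, producer: str, consumer: str) -> None:
        # An edge records that `consumer` depends on data from `producer`.
        self.edges.append((producer, consumer))

graph = WorkloadGraph()
graph.add_node(WorkloadNode("decode", outputs=["buf0"]))
graph.add_node(WorkloadNode("convolve", inputs=["buf0"], outputs=["buf1"]))
graph.add_edge("decode", "convolve")
```

A graph compiler of the type described above would receive a structure of this general shape and assign each of its nodes to a CBB.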
For example, in order for the graph compiler to allocate and/or send a workload node to a DSP and/or a kernel located on the DSP, the graph compiler needs to know the input and output conditions of the DSP (e.g., the type of input and the type of output). In a heterogeneous system including different compute building blocks (CBBs), or in a heterogeneous system that receives and/or otherwise obtains different workload nodes to be executed on different CBBs, execution using a single graph compiler is computationally intensive. Moreover, communication and control between CBBs during runtime is often impractical due to the heterogeneous nature of the system. Similarly, synchronization of data exchange between CBBs is often computationally expensive.

Moreover, the allocation of different workload nodes to different kernels located in a heterogeneous system likewise requires the graph compiler to be individually configured to communicate with each independent kernel. In addition, kernels are often loaded into the accelerator after being generated by a user, thereby requiring reconfiguration of the graph compiler. For example, the graph compiler may not be able to communicate with (e.g., send a workload node to) a kernel that is generated and/or otherwise loaded onto the accelerator after the graph compiler has been initialized.

Examples disclosed herein include methods and apparatus for configuring heterogeneous components in an accelerator. The examples disclosed herein include an accelerator that can operate with any schedule and/or graph. For example, the examples disclosed herein include a graph compiler capable of efficiently understanding and mapping an arbitrary schedule and/or graph to the accelerator. The operation of such examples is described in further detail below.

The examples disclosed herein include various abstractions and/or generalizations of the CBBs during compile time. The examples disclosed herein include adopting a common identification for the CBBs. For example, each CBB, whether heterogeneous or not, can be identified by generating its own selector to interact with that CBB. In such an example, the selector is generated in response to parsing the workload nodes within the workload. Because each workload node often includes details regarding the type of CBB to be used for execution, a selector can be made to interact with such a CBB. In the examples disclosed herein, the selector determines the input and/or output conditions of such a CBB. Selectors can be made as individual entities that can communicate both with the workload nodes within the workload and with the CBBs (e.g., with the workload domain and with the CBB domain). As a result, the graph compiler can include a plugin that enables it to operate in the workload domain. As used herein, the workload domain refers to a level of abstraction and/or generalization that is based on the workload. Likewise, as used herein, the CBB domain refers to a level of abstraction and/or generalization that is based on the CBBs and is more detailed than the workload domain. Examples disclosed herein thereby allow for abstraction of any CBB, whether specific to the system or later included in the system.
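To make the selector abstraction above more concrete, the following is a hedged sketch (Python; the Selector and GraphCompiler classes and their methods are hypothetical stand-ins, not this disclosure's actual interfaces): one selector per workload node reports a CBB's input and output conditions, and the compiler maps the node once it obtains those conditions.

```python
# Hypothetical sketch of the selector abstraction (Selector, GraphCompiler,
# and their methods are illustrative names, not this disclosure's API).

class Selector:
    """One selector per workload node; reports a CBB's input/output conditions."""

    def __init__(self, cbb_name: str, input_type: str, output_type: str):
        self.cbb_name = cbb_name
        self.input_type = input_type      # input condition of the CBB
        self.output_type = output_type    # output condition of the CBB

    def conditions(self) -> tuple:
        return self.input_type, self.output_type

class GraphCompiler:
    """Maps workload nodes to CBBs once a selector supplies the conditions."""

    def __init__(self):
        self.mapping = {}

    def map_node(self, node_name: str, selector: Selector) -> None:
        # A plugin's conversion layer would translate between the workload
        # domain and the CBB domain here; this sketch just records the result.
        in_cond, out_cond = selector.conditions()
        self.mapping[node_name] = (selector.cbb_name, in_cond, out_cond)

compiler = GraphCompiler()
compiler.map_node("convolve", Selector("DSP", "tensor", "tensor"))
```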
Examples disclosed herein utilize buffers identified as input buffers and output buffers. In such examples disclosed herein, a pipeline of CBBs acting as either producers (e.g., a CBB that generates and/or otherwise writes data used by another CBB) or consumers (e.g., a CBB that obtains and/or otherwise reads data generated by another CBB) is implemented using the buffers. By implementing a pipeline of CBBs acting as either producers or consumers, the graph compiler can use generic heuristics (e.g., techniques designed to solve a problem, rules of thumb that operate in the workload domain) when sizing and/or assigning the workload nodes (e.g., tasks) of a workload (e.g., a graph) to the respective CBBs. In some examples disclosed herein, the graph compiler may provide information that can include the size and the number of slots of a buffer (e.g., a storage size) for executing a workload node (e.g., a task). In this manner, an example credit manager can generate n credits based on the n slots of the buffer. The n credits thus represent the n available spaces in memory to which a CBB can write or from which a CBB can read. The credit manager packages the n credits and supplies them to a configuration controller so that they can be sent, via an example fabric (e.g., a control and configuration fabric), to the corresponding producers and/or consumers as determined by the configuration controller.

Furthermore, examples disclosed herein include implementing a standard representation of the CBBs for the graph compiler. Examples disclosed herein include a selector configured for each workload node in the workload. The selector is configured to identify the standard input and/or output conditions of the CBB identified by the corresponding workload node. In addition, such a selector is configured to supply the graph compiler with a list of abstracted devices identified by their input and/or output conditions. In examples disclosed herein, the graph compiler includes a plugin capable of forming a translation layer between the workload nodes (e.g., tasks) within the workload (e.g., the graph) and the various CBBs (e.g., a translation layer between the CBB domain and the workload domain), so that the workload nodes (e.g., tasks) can be mapped to the various CBBs. In addition, in some examples disclosed herein, the selector may return specific requirements of the associated CBB to the graph compiler. For example, the selector may indicate to the graph compiler that such a CBB requires a certain percentage of memory allocation in order to operate.

During runtime, examples disclosed herein include a common architecture used to configure the CBBs so as to enable communication between the CBBs. Examples disclosed herein utilize a credit system along with the pipeline generated by the graph compiler. Such a system enables communication between the CBBs by allowing the graph compiler to map workload nodes (e.g., tasks) from the workload (e.g., the graph) onto a pipeline of producers and consumers. When a CBB acting as an initial producer (a CBB executing a workload node that instructs it to write data) completes execution of the workload node, credits are sent back to the origin as seen from that CBB, rather than to the next CBB. In examples disclosed herein, such an origin may be a credit manager.
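As a concrete illustration of the credit scheme summarized above (n credits generated for the n slots of a buffer, routed from the producer back to the origin and on to the consumer), consider the following Python sketch. It is a minimal software model under stated assumptions, not the disclosed hardware; the class and method names are hypothetical.

    class CreditManager:
        """Minimal model: one credit per buffer slot, routed producer -> consumer."""

        def __init__(self, num_slots: int):
            self.producer_credits = num_slots  # producer may write up to n slots
            self.consumer_credits = 0          # consumer may read completed slots

        def take_write_credit(self) -> bool:
            if self.producer_credits == 0:
                return False                   # buffer full: producer must wait
            self.producer_credits -= 1
            return True

        def credit_back_from_producer(self) -> None:
            # The producer finished a slot; the credit returns to the manager
            # (the "origin"), which forwards it to the consumer.
            self.consumer_credits += 1

        def take_read_credit(self) -> bool:
            if self.consumer_credits == 0:
                return False                   # nothing to read yet
            self.consumer_credits -= 1
            return True

        def credit_back_from_consumer(self) -> None:
            self.producer_credits += 1         # slot is free for the producer again

    mgr = CreditManager(num_slots=3)
    assert mgr.take_write_credit()        # producer writes one slot
    mgr.credit_back_from_producer()       # credit returns to the origin...
    assert mgr.take_read_credit()         # ...and is forwarded to the consumer
    mgr.credit_back_from_consumer()       # after reading, the slot is reusable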
FIG. 1 is a block diagram illustrating an example computer system 100 in which heterogeneous components are configured in an accelerator. In the example of FIG. 1, the computer system 100 includes an example system memory 102 and an example heterogeneous system 104. The example heterogeneous system 104 includes an example host processor 106, an example first communication bus 108, an example first accelerator 110a, an example second accelerator 110b, and an example third accelerator 110c. The example first accelerator 110a, the example second accelerator 110b, and the example third accelerator 110c each include a variety of CBBs, some generic and some specific to the operation of the respective accelerator.

In the example of FIG. 1, the system memory 102 may be implemented by any device for storing data, such as flash memory, magnetic media, optical media, etc. Furthermore, the data stored in the example system memory 102 may be in any data format, such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc. The system memory 102 is coupled to the heterogeneous system 104. In FIG. 1, the system memory 102 is a shared storage between the host processor 106, the first accelerator 110a, the second accelerator 110b, and the third accelerator 110c. In the example of FIG. 1, the system memory 102 is a physical storage local to the computer system 100; in other examples, however, the system memory 102 may be external to and/or otherwise remote with respect to the computer system 100. In further examples, the system memory 102 may be a virtual storage. In the example of FIG. 1, the system memory 102 is a non-volatile memory (e.g., read only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), etc.). In other examples, the system memory 102 may be a non-volatile basic input/output system (BIOS) or a flash storage. In further examples, the system memory 102 may be a volatile memory.

In FIG. 1, the heterogeneous system 104 is coupled to the system memory 102. In the example of FIG. 1, the heterogeneous system 104 processes a workload by executing the workload on one or more of the host processor 106 and/or the first accelerator 110a, the second accelerator 110b, or the third accelerator 110c. In FIG. 1, the heterogeneous system 104 is a system on a chip (SoC). Alternatively, the heterogeneous system 104 may be any other type of computing or hardware system.

In the example of FIG. 1, the host processor 106 is a processing element configured to execute machine readable instructions to carry out operations associated with a computer and/or computing device (e.g., the computer system 100) and/or otherwise assist in completing such operations. In the example of FIG. 1, the host processor 106 is a primary processing element for the heterogeneous system 104 and includes at least one core. Alternatively, the host processor 106 may be a co-primary processing element (e.g., in an example where more than one CPU is utilized), while in other examples the host processor 106 may be a secondary processing element.

In the example illustrated in FIG. 1, one or more of the first accelerator 110a, the second accelerator 110b, and/or the third accelerator 110c are processing elements that may be utilized by a program executing on the heterogeneous system 104 for computing tasks, such as hardware acceleration. For example, the first accelerator 110a is a processing element that includes processing resources designed and/or otherwise configured or structured to improve the processing speed and overall performance of processing machine vision tasks for AI (e.g., a VPU).
In examples disclosed herein, the host processor 106, the first accelerator 110a, the second accelerator 110b, and the third accelerator 110c are each in communication with the other elements of the computer system 100 and/or with the system memory 102. For example, the host processor 106, the first accelerator 110a, the second accelerator 110b, the third accelerator 110c, and/or the system memory 102 communicate via the first communication bus 108. In some examples disclosed herein, the host processor 106, the first accelerator 110a, the second accelerator 110b, the third accelerator 110c, and/or the system memory 102 may communicate via any suitable wired and/or wireless communication method. Moreover, in some examples disclosed herein, the host processor 106, the first accelerator 110a, the second accelerator 110b, the third accelerator 110c, and/or the system memory 102 may communicate with any component external to the computer system 100 via any suitable wired and/or wireless communication method.

In the example of FIG. 1, the first accelerator 110a includes an example convolution engine 112, an example RNN engine 114, an example memory 116, an example memory management unit (MMU) 118, an example DSP 120, and an example controller 122. In examples disclosed herein, any of the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and/or the controller 122 may be referred to as a CBB. In some examples disclosed herein, the memory 116 and/or the MMU 118 may be referred to as base elements. For example, the memory 116 and/or the MMU 118 may be implemented external to the first accelerator 110a. The example convolution engine 112, the example RNN engine 114, the example memory 116, the example MMU 118, the example DSP 120, and the example controller 122 include an example first scheduler 124, an example second scheduler 126, an example third scheduler 128, an example fourth scheduler 130, an example fifth scheduler 132, and an example sixth scheduler 134, respectively. The example DSP 120 and the example controller 122 further include an example first kernel library 136 and an example second kernel library 138, respectively.

In the example illustrated in FIG. 1, the convolution engine 112 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The convolution engine 112 is a device configured to improve the processing of convolution-related tasks. Moreover, the convolution engine 112 improves the processing of tasks associated with the analysis of visual imagery and/or other tasks associated with CNNs.

In the example of FIG. 1, the RNN engine 114 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc.
The RNN engine 114 is a device configured to improve the processing of RNN-related tasks. Moreover, the RNN engine 114 improves the processing of tasks associated with the analysis of unsegmented, connected handwriting recognition, speech recognition, and/or other tasks associated with RNNs.

In the example of FIG. 1, the memory 116 may be implemented by any device for storing data, such as flash memory, magnetic media, optical media, etc. Furthermore, the data stored in the example memory 116 may be in any data format, such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc. The memory 116 is a shared storage between the convolution engine 112, the RNN engine 114, the MMU 118, the DSP 120, and the controller 122 and includes direct memory access (DMA) functionality. Moreover, the memory 116 allows at least one of the convolution engine 112, the RNN engine 114, the MMU 118, the DSP 120, and the controller 122 to access the system memory 102 independently of the host processor 106. In the example of FIG. 1, the memory 116 is a physical storage local to the first accelerator 110a; in other examples, however, the memory 116 may be external to and/or otherwise remote with respect to the first accelerator 110a. In further examples, the memory 116 may be a virtual storage. In the example of FIG. 1, the memory 116 is a non-volatile storage (e.g., read only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), etc.). In other examples, the memory 116 may be a non-volatile basic input/output system (BIOS) or a flash storage. In further examples, the memory 116 may be a volatile memory.

In the example illustrated in FIG. 1, the example MMU 118 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The MMU 118 includes references to all the addresses of the memory 116 and/or the system memory 102. The MMU 118 additionally translates virtual memory addresses utilized by one or more of the convolution engine 112, the RNN engine 114, the DSP 120, and/or the controller 122 into physical addresses in the memory 116 and/or the system memory 102.

In the example of FIG. 1, the DSP 120 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The DSP 120 is a device that improves the processing of digital signals. For example, the DSP 120 facilitates the processing to measure, filter, and/or compress continuous real-world signals, such as data from cameras and/or other sensors related to computer vision.
More generally, the DSP 120 is used to implement, via example kernels in the first kernel library 136, any workload node of a workload that is not served by the other fixed-function CBBs (e.g., the RNN engine 114, a CNN engine, etc.). For example, if a workload includes 100 workload nodes written based on a first language (e.g., TensorFlow, CAFFE, ONNX, etc.), the first accelerator 110a, the second accelerator 110b, and/or the third accelerator 110c may execute 20 of the 100 workload nodes as fixed functions (e.g., using the RNN engine 114, a CNN engine, etc.) and execute the remaining 80 of the 100 workload nodes using respective kernels in the first kernel library 136. In this manner, any element based on the same language (e.g., TensorFlow, CAFFE, ONNX, etc.) can be mapped onto the first accelerator 110a, the second accelerator 110b, and/or the third accelerator 110c.

In FIG. 1, the controller 122 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The controller 122 is implemented as a control unit of the first accelerator 110a. For example, the controller 122 directs the operation of the first accelerator 110a. In some examples, the controller 122 implements a credit manager. Moreover, the controller 122 can instruct one or more of the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, and/or the DSP 120 how to respond to machine readable instructions received from the host processor 106.

In the example of FIG. 1, the first scheduler 124, the second scheduler 126, the third scheduler 128, the fourth scheduler 130, the fifth scheduler 132, and the sixth scheduler 134 are devices that determine when the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122, respectively, execute a portion of a workload that has been offloaded and/or otherwise sent to the first accelerator 110a. Moreover, the first kernel library 136 and the second kernel library 138 are each data structures that include one or more kernels. The kernels of the first kernel library 136 and the second kernel library 138 are, for example, routines compiled for high throughput on the DSP 120 and the controller 122, respectively. The kernels correspond, for example, to executable subsections of an executable file to be executed on the computer system 100.
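The dispatch decision described in the preceding example (a fixed-function CBB where one exists, otherwise a kernel on the DSP) can be sketched as follows. This is an illustrative sketch under stated assumptions; the FIXED_FUNCTION table, the dispatch helper, and the kernel names are hypothetical.

    # Fixed-function engines assumed available on the accelerator (illustrative).
    FIXED_FUNCTION = {"convolution": "CNN engine", "rnn": "RNN engine 114"}

    def dispatch(node_task: str, kernel_library: dict) -> str:
        """Route a workload node to a fixed-function CBB when one exists,
        otherwise to a matching kernel in the DSP's kernel library."""
        if node_task in FIXED_FUNCTION:
            return FIXED_FUNCTION[node_task]
        if node_task in kernel_library:
            return f"DSP 120 kernel: {kernel_library[node_task]}"
        raise ValueError(f"no CBB or kernel can execute {node_task!r}")

    first_kernel_library = {"softmax": "softmax_kernel", "resize": "resize_kernel"}
    print(dispatch("convolution", first_kernel_library))  # -> CNN engine
    print(dispatch("softmax", first_kernel_library))      # -> DSP 120 kernel: softmax_kernel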
In examples disclosed herein, the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 are each in communication with the other elements of the first accelerator 110a. For example, the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 communicate via an example second communication bus 140. In some examples, the second communication bus 140 may be implemented by one or more computing fabrics (e.g., a configuration and control fabric, a data fabric, etc.). In some examples disclosed herein, the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 may communicate via any suitable wired and/or wireless communication method. Moreover, in some examples disclosed herein, the convolution engine 112, the RNN engine 114, the memory 116, the MMU 118, the DSP 120, and the controller 122 may each communicate with any component external to the first accelerator 110a via any suitable wired and/or wireless communication method.

As mentioned above, the example first accelerator 110a, the example second accelerator 110b, and/or the example third accelerator 110c may each include a variety of CBBs, some generic and some specific to the operation of the respective accelerator. For example, the first accelerator 110a, the second accelerator 110b, and the third accelerator 110c each include generic CBBs such as a memory, an MMU, a controller, and respective schedulers for each of the CBBs. Additionally or alternatively, external CBBs not located in any of the example first accelerator 110a, the example second accelerator 110b, and/or the example third accelerator 110c may be included and/or added. For example, a user of the computer system 100 may operate an external RNN engine using any one of the first accelerator 110a, the second accelerator 110b, and/or the third accelerator 110c.

In the example of FIG. 1, while the first accelerator 110a implements a VPU and includes the convolution engine 112, the RNN engine 114, and the DSP 120 (e.g., CBBs specific to the operation of the first accelerator 110a), the second accelerator 110b and the third accelerator 110c may include additional or alternative CBBs specific to the operation of the second accelerator 110b and/or the third accelerator 110c. For example, if the second accelerator 110b implements a GPU, the CBBs specific to the operation of the second accelerator 110b can include a thread dispatcher, a graphics technology interface, and/or any other CBB desirable for improving the processing speed and overall performance of processing computer graphics and/or image processing. Moreover, if the third accelerator 110c implements an FPGA, the CBBs specific to the operation of the third accelerator 110c can include one or more arithmetic logic units (ALUs) and/or any other CBB desirable for improving the processing speed and overall performance of processing general computations.

While the heterogeneous system 104 of FIG. 1 includes the host processor 106, the first accelerator 110a, the second accelerator 110b, and the third accelerator 110c, in some examples the heterogeneous system 104 may include any number of processing elements (e.g., host processors and/or accelerators), including application specific instruction set processors (ASIPs), physics processing units (PPUs), designated DSPs, image processors, coprocessors, floating point processors, network processors, multi-core processors, and front-end processors.

FIG. 2 is a block diagram illustrating an example computer system 200 including an example graph compiler 202 and one or more example selectors 204. In the example of FIG. 2, the computer system 200 further includes an example workload 206 and an example accelerator 208. Furthermore, in FIG. 2, the accelerator 208 includes an example credit manager 210, an example control and configuration (CnC) fabric 212, an example convolution engine 214, an example MMU 216, an example RNN engine 218, an example DSP 220, an example memory 222, and an example configuration controller 224.
In the example of FIG. 2, the memory 222 includes an example DMA unit 226 and one or more example buffers 228. In other examples disclosed herein, any suitable CBB may be included and/or added to the accelerator 208.

In the example illustrated in FIG. 2, the example graph compiler 202 is a means for compiling, or a compiling means. In the example illustrated in FIG. 2, an example selector of the one or more selectors 204 is a means for selecting, or a selecting means. In the example illustrated in FIG. 2, the example credit manager 210 is a means for managing credits, or a credit managing means. In the example illustrated in FIG. 2, the example configuration controller 224 is a means for controlling, or a controlling means. In the example of FIG. 2, any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, the memory 222, and/or a kernel in the kernel bank 232 may be a means for computing, or a computing means.

In the example illustrated in FIG. 2, the graph compiler 202 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. In FIG. 2, the graph compiler 202 is coupled to the one or more selectors 204 and to the accelerator 208. In operation, the graph compiler 202 receives the workload 206 and compiles the workload 206 into an example executable file 230 to be executed by the accelerator 208. For example, the graph compiler 202 receives the workload 206 and assigns the various workload nodes of the workload 206 (e.g., a graph) to the various CBBs of the accelerator 208 (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226). The graph compiler 202 further generates an example selector of the one or more selectors 204 corresponding to each workload node in the workload 206. Moreover, the graph compiler 202 allocates memory for the one or more buffers 228 in the memory 222 of the accelerator 208. In examples disclosed herein, the executable file 230 may be generated on a separate system (e.g., a compilation system and/or a compilation processor) and stored for later use on a different system (e.g., a deployment system, a runtime system, a deployment processor, etc.). For example, the graph compiler 202 and the one or more selectors 204 may be located on a system separate from the accelerator 208.

In the example illustrated in FIG. 2, the one or more selectors 204 are implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The one or more selectors 204 are coupled to the graph compiler 202, to the accelerator 208, and to an example kernel bank 232 located in the DSP 220. The one or more selectors 204 are coupled to the graph compiler 202 to obtain the workload 206. Each workload node (e.g., task) in the workload 206 indicates which CBB (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) is to be used to execute the associated workload.
In examples disclosed herein, a selector of the one or more selectors 204 is generated for each workload node and is associated with the corresponding CBB (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) and/or with a kernel in the kernel bank 232. The one or more selectors 204 are generated by the graph compiler 202 in response to the workload 206 and can identify the respective input and/or output conditions of the various CBBs (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) and/or of the kernels in the kernel bank 232. Such identifications by the one or more selectors 204 can be represented as abstracted knowledge for use by the graph compiler 202. Such abstracted knowledge enables the graph compiler 202 to operate independently of the heterogeneous nature of the accelerator 208.

Moreover, the graph compiler 202 utilizes the one or more selectors 204 to map each workload node from the workload 206 to the corresponding CBB (e.g., one of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) and/or kernel in the kernel bank 232. In addition, the graph compiler 202 utilizes the one or more selectors 204 to configure the CBBs (e.g., one of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) with specific behaviors and parameters, along with the appropriate amount of credits for the corresponding workload node and the adjacent workload nodes (e.g., the consumers and/or producers resulting from the workload node). In some examples disclosed herein, the one or more selectors 204 may map each workload node from the workload 206 to the corresponding CBB (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) and/or kernel in the kernel bank 232.

In examples disclosed herein, the one or more selectors 204 may be included in the graph compiler 202. In such examples disclosed herein, additional selectors may be included in the one or more selectors 204, or, alternatively, the current selectors of the one or more selectors 204 may be altered in response to a change in the workload 206 and/or in the accelerator 208 (e.g., a new workload 206 is supplied, an additional CBB is added to the accelerator 208, etc.).

In some examples, the graph compiler 202 identifies a workload node from the workload 206 indicating that data is to be scaled. The workload node indicating that data is to be scaled is sent to the selector of the one or more selectors 204 associated with such a task. The selector of the one or more selectors 204 associated with the identified workload node can identify, for the graph compiler 202, the CBB (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) and/or kernel in the kernel bank 232 to execute the workload, along with the identified input and/or output conditions of such an identified CBB and/or kernel in the kernel bank 232.

In the example of FIG. 2, the workload 206 is, for example, a graph, function, algorithm, program, application, and/or other code to be executed by the accelerator 208. In some examples, the workload 206 is a description of a graph, function, algorithm, program, application, and/or other code. The workload 206 may be any arbitrary graph and/or any suitable input obtained from a user. For example, the workload 206 may be a workload associated with AI processing, such as a deep learning topology and/or computer vision.
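The standardized input and output conditions that such selectors report can be pictured as a small record type, with device matching reduced to equality on that record. The sketch below is illustrative only; IOConditions and the device names used as keys are assumptions, not the disclosed interface.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IOConditions:
        """Standardized input/output conditions (type and count), as a selector
        might report them for a CBB or a kernel."""
        input_type: str
        num_inputs: int
        output_type: str
        num_outputs: int

    # Conditions a workload node requires versus conditions the CBBs offer.
    node_needs = IOConditions("tensor", 1, "tensor", 1)
    cbb_offers = {
        "convolution engine 214": IOConditions("tensor", 1, "tensor", 1),
        "RNN engine 218": IOConditions("sequence", 1, "tensor", 1),
    }

    matches = [name for name, cond in cbb_offers.items() if cond == node_needs]
    print(matches)  # ['convolution engine 214']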
In operation, each workload node in the workload 206 (e.g., the graph) includes constraints specifying the input and/or output conditions of a specific CBB (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226), kernel in the kernel bank 232, and/or workload node for executing the task within the workload node. Accordingly, an example plugin 236 included in the graph compiler 202 enables the mapping between the workload nodes of the workload 206 (e.g., the graph) and the associated CBBs and/or kernels in the kernel bank 232. The plugin 236 interacts with the abstracted knowledge obtained by the one or more selectors 204 (e.g., the respective standard input and/or output definitions of each CBB and/or kernel in the kernel bank 232) in order to allocate the workload nodes in the workload 206 (e.g., the graph). In such examples disclosed herein, the plugin 236 may form a translation layer between the workload nodes in the workload 206 (e.g., the graph) and the various CBBs (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the DMA unit 226) and/or kernels in the kernel bank 232, operating on the various CBBs and/or kernels based on the abstracted knowledge obtained by the one or more selectors 204 (e.g., the respective standard input and/or output definitions of each CBB and/or kernel in the kernel bank 232).

In the example of FIG. 2, the accelerator 208 is coupled to the graph compiler 202 and to the one or more selectors 204. In some examples disclosed herein, during compile time, the graph compiler 202 may operate on a compilation system (e.g., a first processor) and utilize the one or more selectors 204 to perform the compilation process (e.g., to generate the executable file 230). As a result, the graph compiler 202 generates the example executable file 230 on the compilation system (e.g., the first processor). Additionally or alternatively, the executable file 230 may be stored in a database for later use. For example, the executable file 230 may be stored on, and executed by, the compilation system (e.g., the first processor) and/or any external and/or internal system (e.g., a deployment system, a second processor, etc.). During runtime, the executable file 230 operates on a deployment system (e.g., the system 100 of FIG. 1, a second processor, etc.). The compilation system (e.g., the first processor) may operate separately from the deployment system (e.g., the system 100 of FIG. 1, the second processor, etc.). Alternatively, the compilation system and/or the deployment system may be combined and, as such, may enable just-in-time (JIT) compilation of any workload (e.g., the workload 206) into an executable (e.g., the executable file 230) to be executed directly by the accelerator 208.

In the example illustrated in FIG. 2, the credit manager 210 is coupled to the CnC fabric 212. The credit manager 210 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The credit manager 210 is a device that manages credits associated with one or more of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220.
In some examples, the credit manager 210 can be implemented by a controller as a credit manager controller. Credits represent data associated with workload nodes that is available in the memory 222 and/or the amount of space available in the memory 222 for the output of a workload node. In other examples, a credit and/or a credit value may indicate the number of slots of a buffer (e.g., one of the buffers 228) available for storing and/or otherwise writing data.

The credit manager 210 and/or the configuration controller 224 can partition the memory 222 into one or more buffers (e.g., the buffers 228) associated with each workload node of a given workload, based on the executable file 230 received from the graph compiler 202 and distributed by the configuration controller 224. As such, credits may represent the slots of the associated buffer (e.g., of the buffers 228) available for storing and/or otherwise writing data. For example, the credit manager 210 receives information corresponding to the workload 206 (e.g., the configuration and control messages 234 and/or otherwise configuration and control messages). For example, the credit manager 210 receives, from the configuration controller 224 via the CnC fabric 212, information determined by the configuration controller 224 indicating the CBBs initialized as producers and the CBBs initialized as consumers.

In examples disclosed herein, in response to an instruction received from the configuration controller 224 directing the execution of a particular workload node (e.g., the configuration controller 224 sends the configuration and control messages 234), the credit manager 210 supplies and/or otherwise transmits the corresponding credits to the CBB acting as the initial producer (e.g., supplies three credits to the convolution engine 214 to write data into three slots of a buffer). Once the CBB acting as the initial producer completes the workload node, the credits are sent back to the origin as seen from that CBB (e.g., the credit manager 210). In response to obtaining the credits from the producer, the credit manager 210 supplies and/or otherwise transmits the credits to the CBB acting as the consumer (e.g., three credits are supplied to the DSP 220 to read the data from the three slots of the buffer). Such an order of producers and consumers is determined using the executable file 230. In this manner, the CBBs transmit an indication of their ability to operate via the credit manager 210, regardless of their heterogeneous nature. A producer CBB produces data that is utilized by another CBB, whereas a consumer CBB consumes and/or otherwise processes data produced by another CBB.

In some examples disclosed herein, the credit manager 210 may be configured to determine whether a workload node has completed execution. In such an example, the credit manager 210 may clear all the credits in the CBBs associated with that workload node.

In the example of FIG. 2, the CnC fabric 212 is coupled to the credit manager 210, the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, the memory 222, and the configuration controller 224. In some examples disclosed herein, the memory 222 and/or the MMU 216 are referred to as base elements and need not be coupled to the CnC fabric 212. The CnC fabric 212 is a control fabric including a network of wires and at least one logic circuit that allows one or more of the credit manager 210, the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220 to transmit credits to, and/or receive credits from, one or more of the credit manager 210, the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, the memory 222, and/or the configuration controller 224.
Moreover, the CnC fabric 212 is configured to transmit the example configuration and control messages 234 to and/or from the one or more selectors 204. In other examples disclosed herein, any suitable computing fabric may be used to implement the CnC fabric 212 (e.g., an Advanced eXtensible Interface (AXI), etc.).

In the example illustrated in FIG. 2, the convolution engine 214 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The convolution engine 214 is coupled to the CnC fabric 212. The convolution engine 214 is a device configured to improve the processing of convolution-related tasks. Moreover, the convolution engine 214 improves the processing of tasks associated with the analysis of visual imagery and/or other tasks associated with CNNs.

In the example illustrated in FIG. 2, the example MMU 216 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The MMU 216 is coupled to the CnC fabric 212. The MMU 216 is a device that enables translation of addresses of the memory 222 and/or of a memory remote with respect to the accelerator 208. The MMU 216 additionally translates virtual memory addresses utilized by one or more of the credit manager 210, the convolution engine 214, the RNN engine 218, and/or the DSP 220 into physical addresses in the memory 222 and/or in the memory remote with respect to the accelerator 208.

In FIG. 2, the RNN engine 218 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The RNN engine 218 is coupled to the CnC fabric 212. The RNN engine 218 is a device configured to improve the processing of RNN-related tasks. Moreover, the RNN engine 218 improves the processing of tasks associated with the analysis of unsegmented, connected handwriting recognition, speech recognition, and/or other tasks associated with RNNs.

In the example of FIG. 2, the DSP 220 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The DSP 220 is coupled to the CnC fabric 212.
The DSP 220 is a device that improves the processing of digital signals. For example, the DSP 220 facilitates the processing to measure, filter, and/or compress continuous real-world signals, such as data from cameras and/or other sensors related to computer vision.

In the example of FIG. 2, the memory 222 may be implemented by any device for storing data, such as flash memory, magnetic media, optical media, etc. Furthermore, the data stored in the example memory 222 may be in any data format, such as binary data, comma delimited data, tab delimited data, structured query language (SQL) structures, etc. The memory 222 is coupled to the CnC fabric 212. The memory 222 is a shared storage between at least one of the credit manager 210, the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the configuration controller 224. The memory 222 includes the DMA unit 226. Moreover, the memory 222 may be partitioned into the one or more buffers 228 associated with the one or more workload nodes of the workload associated with the executable file received by the configuration controller 224 and/or the credit manager 210. Furthermore, the DMA unit 226 of the memory 222 operates in response to commands supplied by the configuration controller 224 via the CnC fabric 212. In some examples disclosed herein, the DMA unit 226 of the memory 222 allows at least one of the credit manager 210, the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or the configuration controller 224 to access a memory remote with respect to the accelerator 208 independently of a respective processor (e.g., the host processor 106). In the example of FIG. 2, the memory 222 is a physical storage local to the accelerator 208. Additionally or alternatively, in other examples, the memory 222 may be external to and/or otherwise remote with respect to the accelerator 208. In further examples disclosed herein, the memory 222 may be a virtual storage. In the example of FIG. 2, the memory 222 is a non-volatile storage (e.g., ROM, PROM, EPROM, EEPROM, etc.). In other examples, the memory 222 may be a non-volatile BIOS or a flash storage. In further examples, the memory 222 may be a volatile memory.

In examples disclosed herein, the configuration controller 224 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. The configuration controller 224 is implemented as a control unit of the accelerator 208. In examples disclosed herein, the one or more selectors 204 send the configuration and control messages 234 to the graph compiler 202 for generation of the executable file 230. In some examples disclosed herein, the configuration controller 224 may obtain and parse the executable file 230 to identify the configuration and control messages 234 (e.g., obtained and/or otherwise transmitted by the one or more selectors 204) indicating the workload nodes included in the executable file 230. As such, the configuration controller 224 supplies the configuration and control messages 234 (e.g., obtained by and/or otherwise sent to the one or more selectors 204) to the various CBBs in order to perform the tasks of the executable file 230.
In such examples disclosed herein, the configuration and control messages 234 are embedded in the executable file 230 and, as such, are supplied to the configuration controller 224 and sent to the various CBBs and/or to the kernels located in the kernel bank 232. For example, the configuration controller 224 parses the executable file 230 to identify the workload nodes within the executable file 230 and instructs one or more of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, the kernels in the kernel bank 232, and/or the memory 222 how to respond to the executable file 230 and/or other machine readable instructions received from the graph compiler 202 via the credit manager 210.

In examples disclosed herein, the configuration controller 224 transmits the workload nodes (e.g., in a configuration and control format) from the obtained executable file 230 to the identified corresponding CBBs. Likewise, the configuration controller 224 may transmit the workload nodes (e.g., in a configuration and control format) to the credit manager 210 to initiate the distribution of credits.

In the example of FIG. 2, the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220 may include respective example schedulers 238, 240, 242, and 244. In operation, the schedulers 238, 240, 242, and 244 execute the respective portions (e.g., workload nodes) of the workload 206 assigned to the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220, respectively, by the configuration controller 224, the credit manager 210, and/or an additional CBB of the accelerator 208. Depending on the task and/or other operations of a given workload node, the workload node may be a producer and/or a consumer.

In the example of FIG. 2, each of the schedulers 238, 240, 242, and 244 may, in response to instructions provided by the credit manager 210, receive and/or otherwise load a credit value associated with the workload node indicated to the corresponding CBB (e.g., one of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220) for writing data (e.g., as a producer) into a buffer (e.g., at least one of the buffers 228). For example, if the executable file 230 indicates that the RNN engine 218 is to act as a producer and write three bits of data into a buffer (e.g., one of the buffers 228), the scheduler 242 can load a credit value of three into the RNN engine 218. Moreover, in such an example, the executable file 230 may indicate that the MMU 216 is to read the three bits previously written by the RNN engine 218. As such, once the credits are utilized, the scheduler 242 (or the RNN engine 218) sends the three credits to the MMU 216 via the CnC fabric 212 and the credit manager 210.

In operation, the schedulers 238, 240, 242, 244 and/or the CBBs (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220) may transmit credits incrementally and/or in any suitable manner. In another example, a first CBB may be supplied a first credit value for executing a first workload node. In such an example, in response to executing the first workload node, the first CBB writes data into a first buffer (e.g., one of the buffers 228) in the memory 222 and sends a second credit value to the credit manager 210. The second credit value represents the amount of the first credit value used to write the data into the first buffer (e.g., one of the buffers 228).
For example, if the first credit value is three and the first CBB writes to two slots of the buffer (e.g., one of the buffers 228), the first CBB sends two credits to the credit manager 210. In response, the credit manager 210 sends the second credit value (e.g., two credits) to a second CBB, and the second CBB uses the second credit value (e.g., the two credits) to read the data in the two slots of the buffer (e.g., one of the buffers 228). As such, the second CBB can then execute a second workload node. In examples disclosed herein, the buffers 228 are implemented by circular buffers including any suitable number of data slots to be used in reading and/or writing data.

In the example illustrated in FIG. 2, the kernel bank 232 is a data structure including one or more kernels. The kernels of the kernel bank 232 are, for example, routines compiled for high throughput on the DSP 220. In other examples disclosed herein, each CBB (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220) may include a respective kernel bank. The kernels correspond, for example, to executable subsections of an executable file to be executed on the accelerator 208. While, in the example of FIG. 2, the accelerator 208 implements a VPU and includes the credit manager 210, the CnC fabric 212, the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, the memory 222, and the configuration controller 224, the accelerator 208 may include additional or alternative CBBs to those illustrated in FIG. 2. In further and/or alternative examples disclosed herein, the kernel bank 232 is coupled to the one or more selectors 204 so as to be abstracted for use by the graph compiler 202.

In the example of FIG. 2, the data fabric 233 is coupled to the credit manager 210, the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, the memory 222, the configuration controller 224, and the CnC fabric 212. The data fabric 233 is a network of wires and at least one logic circuit that allows one or more of the credit manager 210, the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, the memory 222, and/or the configuration controller 224 to exchange data. For example, the data fabric 233 allows a producer CBB to write tiles of data into buffers of memory, such as the memory 222 and/or a memory located in one or more of the convolution engine 214, the MMU 216, the RNN engine 218, and the DSP 220. Moreover, the data fabric 233 allows a consumer CBB to read tiles of data from buffers of memory, such as the memory 222 and/or a memory located in one or more of the convolution engine 214, the MMU 216, the RNN engine 218, and the DSP 220. The data fabric 233 transfers data to and from memory according to the information provided in a data package. For example, data can be transferred by means of packets, where a packet includes a header, a payload, and a trailer. The header of a packet carries the destination address of the data, the source address of the data, the type of protocol by which the data is being sent, and a packet number. The payload is the data produced or consumed by a CBB. The data fabric 233 can facilitate the exchange of data between CBBs by parsing the header of a packet to determine the intended destination address. In some examples disclosed herein, the data fabric 233 and the CnC fabric 212 may be implemented using a single computing fabric and/or multiple computing fabrics.
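The packet layout just described (a header carrying the destination address, source address, protocol type, and packet number, followed by a payload and a trailer) can be modeled in a few lines. The field widths and byte order below are illustrative assumptions, not a specification of the data fabric 233.

    import struct

    # Assumed header layout: destination, source, protocol, packet number as
    # four 32-bit unsigned fields, followed by the payload and a 32-bit trailer.
    HEADER_FMT = ">IIII"

    def make_packet(dst: int, src: int, proto: int, number: int,
                    payload: bytes, trailer: int = 0) -> bytes:
        header = struct.pack(HEADER_FMT, dst, src, proto, number)
        return header + payload + struct.pack(">I", trailer)

    def route(packet: bytes) -> int:
        """Parse only the header, as the data fabric would, to recover the
        intended destination address."""
        dst, _src, _proto, _number = struct.unpack_from(HEADER_FMT, packet)
        return dst

    pkt = make_packet(dst=0x222, src=0x214, proto=1, number=7, payload=b"tile")
    assert route(pkt) == 0x222  # deliver the tile toward memory 222 (illustrative)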
FIG. 3 is a block diagram of an example selector 300 representative of one of the one or more selectors 204 of FIG. 2. The selector 300 is an example of a selector generated by the graph compiler 202 of FIG. 2 for a particular workload node. In such an example, the selector 300 may be generated to communicate with a particular CBB of FIG. 2 (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220) and/or with a kernel in the kernel bank 232. The selector 300 may be implemented for each workload node in the workload 206 of FIG. 2. Moreover, an individual selector may be implemented for each individual workload node within the workload 206. The selector 300 illustrated in FIG. 3 includes an example CBB analyzer 302, an example kernel analyzer 304, and an example compiler interface 306. In operation, any of the CBB analyzer 302, the kernel analyzer 304, and/or the compiler interface 306 may communicate via an example communication bus 308. In FIG. 3, the communication bus 308 may be implemented using any suitable communication method and/or apparatus (e.g., Bluetooth® communication, LAN communication, WLAN communication, etc.). In some examples disclosed herein, the selector 300 is representative of an example selector of the one or more selectors 204 and may be included in the graph compiler 202 of FIG. 2.

In the example illustrated in FIG. 3, the CBB analyzer 302 is a means for analyzing computing elements, or a computing element analyzing means. In the example of FIG. 3, the kernel analyzer 304 is a means for analyzing kernels, or a kernel analyzing means. In the example of FIG. 3, the compiler interface 306 is a means for compiler communication, or a compiler communicating means.

In the example illustrated in FIG. 3, the CBB analyzer 302 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. In operation, the CBB analyzer 302 is configured to identify the input and output conditions of the CBB (e.g., one of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220) associated with the workload. The CBB analyzer 302 of FIG. 3 is configured to identify the type of input conditions associated with the CBB identified to execute the workload node, corresponding to standard input requirements (e.g., data structure, number of inputs, etc.). Moreover, the CBB analyzer 302 is configured to identify the type of output conditions associated with the CBB identified to execute the workload, corresponding to standard results (e.g., number of outputs, type of results, etc.). In this manner, the identified input and output conditions are identified by the CBB analyzer 302 and provided in a standard format for use by the graph compiler 202.

In other examples disclosed herein, the CBB analyzer 302 may communicate with the associated CBB to identify operational requirements. For example, if the CBB requires a certain percentage of memory allocation in order to execute an example workload node, such a requirement can be determined by the CBB analyzer 302 and transmitted to the graph compiler 202 via the compiler interface 306.
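One way to picture the CBB analyzer's role, under the assumption that each CBB can be queried for its standard descriptors, is the following sketch. Every name here (CBBAnalyzer, FakeConvolutionEngine, memory_fraction, etc.) is hypothetical and serves only to make the idea concrete.

    class CBBAnalyzer:
        """Reduces a concrete CBB to standardized input/output conditions that
        the graph compiler can consume without knowing the CBB itself."""

        def __init__(self, cbb):
            self.cbb = cbb

        def identify_conditions(self) -> dict:
            return {
                "input_type": self.cbb.input_type,    # e.g., "tensor"
                "num_inputs": self.cbb.num_inputs,
                "output_type": self.cbb.output_type,
                "num_outputs": self.cbb.num_outputs,
            }

        def operating_requirements(self) -> dict:
            # e.g., the CBB needs a share of memory in order to operate at all.
            return {"memory_fraction": getattr(self.cbb, "memory_fraction", 0.0)}

    class FakeConvolutionEngine:
        input_type, num_inputs = "tensor", 1
        output_type, num_outputs = "tensor", 1
        memory_fraction = 0.25

    analyzer = CBBAnalyzer(FakeConvolutionEngine())
    print(analyzer.identify_conditions())
    print(analyzer.operating_requirements())  # {'memory_fraction': 0.25}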
In some examples disclosed herein, the CBB analyzer 302 communicates indirectly with the associated CBB by utilizing internal knowledge and/or current and/or previous modeling of the associated CBB. Example internal knowledge and/or current and/or previous modeling may include knowledge of the operational requirements of the CBB. In addition, the CBB analyzer 302 may perform a node analysis of the associated workload to identify the node type. Such an example analysis may be performed using a node analyzer located in the selector 300. Furthermore, in such an example, the identified node type may be communicated to, provided to, and/or otherwise utilized by the graph compiler 202. In this manner, the selector 300 obtains knowledge of the corresponding CBB and/or of the multiple CBBs that may be targets onto which the corresponding workload node is mapped. For example, there may be a workload node indicating that a multiplication is to be executed. As such, the graph compiler 202 of FIG. 2 can invoke and/or otherwise communicate with the selector 300 that has knowledge of multiplication (e.g., based on parsing the identified node type) and supply the relevant parameters of the workload node to the selector 300. The CBB analyzer 302 of the selector 300 identifies the CBB to execute the workload node, to be used in the mapping. In some examples disclosed herein, the CBB analyzer 302 may map the corresponding workload node to the corresponding CBB.

In FIG. 3, the example kernel analyzer 304 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. In operation, the kernel analyzer 304 is configured to identify the input and output conditions of a kernel (e.g., a kernel included in the kernel bank 232 of FIG. 2). For example, the kernel analyzer 304 is configured to identify the type of input conditions associated with the kernel identified to execute the workload node, corresponding to standard input requirements (e.g., data structure, number of inputs, etc.). Moreover, the kernel analyzer 304 is configured to identify the type of output conditions associated with the kernel identified to execute the workload, corresponding to standard results (e.g., number of outputs, type of results, etc.). In this manner, the identified input and output conditions are provided in a standard format for use by the graph compiler 202. In examples disclosed herein, the kernel analyzer 304 may identify the type of input and/or output conditions of any kernel included in the accelerator 208 (e.g., a new kernel downloaded onto the accelerator, etc.).

In other examples disclosed herein, the kernel analyzer 304 may communicate with the associated kernel to identify operational requirements. For example, if the kernel requires a certain percentage of memory allocation in order to execute an example workload node, such a requirement can be determined by the kernel analyzer 304 and transmitted to the graph compiler 202 via the compiler interface 306.

In some examples disclosed herein, the kernel analyzer 304 communicates indirectly with the associated kernel by utilizing internal knowledge and/or current and/or previous modeling of the associated kernel. Example internal knowledge and/or current and/or previous modeling may include knowledge of the operational requirements of the kernel. In addition, the kernel analyzer 304 may perform a node analysis of the associated workload to identify the node type.
Such an example analysis may be performed using a node analyzer located in the selector 300. Furthermore, in such an example, the identified node type may be communicated to, provided to, and/or otherwise utilized by the graph compiler 202. For example, there may be a workload node indicating that a multiplication is to be executed. As such, the graph compiler 202 of FIG. 2 can invoke and/or otherwise communicate with the selector 300 that has knowledge of multiplication (e.g., based on the identified node type) and supply the relevant parameters of the workload node to the selector 300. The kernel analyzer 304 of the selector 300 identifies the kernel to execute the workload node, to be used in the mapping. In some examples disclosed herein, the kernel analyzer 304 may map the corresponding workload node to the corresponding kernel.

In examples disclosed herein, either the CBB analyzer 302 and/or the kernel analyzer 304 may communicate the identified constraints and/or requirements to the graph compiler 202 via the compiler interface 306.

In the example illustrated in FIG. 3, the compiler interface 306 is implemented by a logic circuit, such as a hardware processor. However, any other type of circuitry may additionally or alternatively be used, such as, for example, one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable gate array(s) (FPGA(s)), digital signal processor(s) (DSP(s)), etc. In some examples disclosed herein, the compiler interface 306 may be implemented by a software application programming interface (API), which may be executed on hardware circuitry. The compiler interface 306 of such an example enables communication between the selector 300 and the graph compiler 202 of FIG. 2. Moreover, the compiler interface 306 may be implemented by any type of interface standard, such as an Ethernet® interface, a universal serial bus (USB) interface, a Bluetooth® interface, a near field communication (NFC) interface, and/or a PCI Express interface. The compiler interface 306 is configured to obtain the input and output conditions from either the CBB analyzer 302 and/or the kernel analyzer 304 and to transmit the input and output conditions to the graph compiler 202. Additionally or alternatively, the compiler interface 306 may be configured to transmit the requirements determined by the CBB analyzer 302 and/or the kernel analyzer 304 to the graph compiler 202.

FIG. 4 is a block diagram illustrating an example of the graph compiler 202 of FIG. 2. As illustrated in FIG. 4, the graph compiler 202 includes an example graph interface 402, an example selector interface 404, an example workload analyzer 406, an example executable file generator 408, an example data store 410, and the plugin 236 of FIG. 2. In operation, any of the graph interface 402, the selector interface 404, the workload analyzer 406, the executable file generator 408, the data store 410, and/or the plugin 236 may communicate via an example communication bus 412. In FIG. 4, the communication bus 412 may be implemented using any suitable communication method and/or apparatus (e.g., Bluetooth® communication, LAN communication, WLAN communication, etc.).

In the example illustrated in FIG. 4, the graph interface 402 is a means for graph communication, or a graph communicating means.
In the example shown in FIG. 4, the graph interface 402 is a means for graph communication, the selector interface 404 is a means for selector communication, the workload analyzer 406 is a means for analyzing workloads, the plug-in 236 is a means for conversion, the executable file generator 408 is a means for generating executable files, and the data store 410 is a means for storing data.

In the example shown in FIG. 4, the graph interface 402 is implemented by a logic circuit such as a hardware processor. Any other type of circuitry may additionally or alternatively be used, e.g., one or more analog or digital circuits, logic circuits, programmable processors, ASICs, PLDs, FPGAs, DSPs, and the like. Moreover, the graph interface 402 may be implemented by any type of interface standard, such as an Ethernet interface, a Universal Serial Bus (USB), a Bluetooth® interface, a Near Field Communication (NFC) interface, and/or a PCI Express interface. The graph interface 402 is configured to determine whether a workload (e.g., the workload 206 of FIG. 2) has been received. In the examples disclosed herein, the graph interface 402 may store the workload 206 in the data store 410 when the workload 206 is available.

In FIG. 4, the example selector interface 404 is implemented by a logic circuit such as a hardware processor. Any other type of circuitry may additionally or alternatively be used, e.g., one or more analog or digital circuits, logic circuits, programmable processors, ASICs, PLDs, FPGAs, DSPs, and the like. Moreover, the selector interface 404 may be implemented by any type of interface standard, such as an Ethernet interface, a Universal Serial Bus (USB), a Bluetooth® interface, a Near Field Communication (NFC) interface, and/or a PCI Express interface. The selector interface 404 is configured to generate and/or otherwise supply one or more selectors 204 for each workload node in the workload 206 in response to acquiring the workload 206. Moreover, the selector interface 404 is configured to obtain and/or otherwise receive input and/or output conditions from the one or more selectors 204. For example, the selector interface 404 is configured to acquire the input and/or output conditions of each CBB in the accelerator 208 (e.g., any of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220). In such an operation, the selector interface 404 acquires a generic list of the CBBs that specifies the input and output conditions under which each CBB operates. In another example, the selector interface 404 is configured to obtain the input and output conditions of each kernel in the accelerator 208 (e.g., any kernel in the kernel bank 232 and/or any suitable kernel). In such an operation, the selector interface 404 acquires a generic list of the kernels that specifies the input and output conditions under which each kernel operates. During operation, the selector interface 404 stores the input and/or output conditions identified by the one or more selectors 204 in the data store 410.
In the example shown in FIG. 4, the workload analyzer 406 is implemented by a logic circuit such as a hardware processor. Any other type of circuitry may additionally or alternatively be used, e.g., one or more analog or digital circuits, logic circuits, programmable processors, ASICs, PLDs, FPGAs, DSPs, and the like. The workload analyzer 406 parses the workload nodes included in the workload (e.g., the workload 206 of FIG. 2) to identify the input and output conditions used to run each workload node. The workload analyzer 406 may send the parsed workload nodes to the selector interface 404 for use by the one or more selectors 204 and/or to the data store 410 for use by the plug-in 236.

In the example of FIG. 4, the plug-in 236 is implemented by a logic circuit such as a hardware processor. Any other type of circuitry may additionally or alternatively be used, e.g., one or more analog or digital circuits, logic circuits, programmable processors, ASICs, PLDs, FPGAs, DSPs, and the like. During operation, the plug-in 236 is configured to communicate with the selector interface 404, the workload analyzer 406, and the data stored in the data store 410 in order to map the workload nodes identified by the workload analyzer 406 to the CBBs (e.g., one of the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220). For example, the plug-in 236 maps and/or allocates the workload to the CBBs and/or kernels in the accelerator 208 based on the identified input and/or output conditions. Further, in such an example, the plug-in 236 acquires the input and output conditions for implementing a workload node and assigns that workload node to be executed by a device that includes the same or substantially similar input and/or output conditions (e.g., one of the convolution engine 214, the MMU 216, the RNN engine 218, the DSP 220, and/or a kernel located in the kernel bank 232). Thus, the plug-in 236 requires no direct knowledge of the particular device to which a workload node is assigned.

In some examples disclosed herein, the plug-in 236 may be implemented using suitable AI techniques to observe and/or predict which CBB and/or kernel a particular workload node may be assigned to. For example, if the plug-in 236 had previously assigned a workload node indicating that data is to be backed up to a particular CBB, and such a workload node is to be assigned in the future, the plug-in 236 can assign it to that particular CBB without parsing the data stored in the data store 410.
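The condition-based assignment performed by the plug-in can be sketched as matching a node's requirements against recorded device conditions. The sketch below is an illustration only; the IOConditions fields, the sample entries, and the exact-match rule are assumptions, not the patent's data layout or matching policy.

```python
from dataclasses import dataclass

@dataclass
class IOConditions:
    device: str          # CBB or kernel identifier
    input_format: str    # e.g., expected data structure
    num_inputs: int
    num_outputs: int

# Stand-in for the condition records held in the data store 410.
conditions = [
    IOConditions("convolution engine 214", "tensor", 2, 1),
    IOConditions("DSP 220", "flat buffer", 1, 1),
]

def map_node(node_inputs: int, node_outputs: int, node_format: str) -> str:
    """Assign a node to the first device whose recorded conditions match."""
    for cond in conditions:
        if (cond.num_inputs == node_inputs
                and cond.num_outputs == node_outputs
                and cond.input_format == node_format):
            return cond.device
    raise LookupError("no CBB or kernel with matching conditions")

print(map_node(2, 1, "tensor"))  # -> "convolution engine 214"
```

Note that nothing in `map_node` names a specific device up front; as described above, the plug-in only compares conditions, which is what keeps the compiler independent of the particular CBBs and kernels present in the accelerator.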
In FIG. 4, the example executable file generator 408 is implemented by a logic circuit such as a hardware processor. Any other type of circuitry may additionally or alternatively be used, e.g., one or more analog or digital circuits, logic circuits, programmable processors, ASICs, PLDs, FPGAs, DSPs, and the like. After the plug-in 236 assigns a workload node to a device having similar input and/or output conditions, the executable file generator 408 is configured to generate the executable file 230 of FIG. 2 to be executed by the accelerator 208. The executable file generator 408 further sends the executable file 230 to the configuration controller 224. Moreover, the executable file generator 408 may generate one or more executable files to be executed by the accelerator 208.

In the example shown in FIG. 4, the data store 410 may be implemented by any device for storing data, such as, for example, flash memory, magnetic media, optical media, and the like. Further, the data stored in the example data store 410 may be in any data format, such as binary data, comma-delimited data, tab-delimited data, structured query language (SQL) structures, and the like. In FIG. 4, the data store 410 is configured to store the input and/or output conditions obtained from the selector interface 404, the workload obtained from the graph interface 402 (e.g., the workload 206 of FIG. 2), and/or the input and/or output conditions for executing the workload nodes (e.g., the input and/or output conditions identified by the workload analyzer 406). The data store 410 may be written to and/or read by any of the graph interface 402, the selector interface 404, the workload analyzer 406, the plug-in 236, and/or the executable file generator 408.

FIG. 5 is a diagram illustrating a pipeline 500 representing a workload performed using an example first CBB 502 and an example second CBB 504. The first CBB 502 and/or the second CBB 504 may be any of the example CBBs of FIG. 2 (e.g., the convolution engine 214, the MMU 216, the RNN engine 218, and/or the DSP 220). Alternatively, the first CBB 502 and/or the second CBB 504 may be implemented using any suitable kernel (e.g., a kernel located in the kernel bank 232). In the example of FIG. 5, the first CBB 502 is a producer and the second CBB 504 is a consumer. The example pipeline 500 includes an example first workload node 506 and an example second workload node 508. In the example of FIG. 5, the first CBB 502 is configured to execute the first workload node 506. Similarly, the second CBB 504 is configured to execute the second workload node 508. During operation, the example credit manager 510 is configured to supply a first credit value to the first CBB 502 to execute the first workload node 506. For example, the first credit value is five credits (the five data slots available in the buffer 512); as such, it instructs the first CBB 502 to begin execution of the first workload node 506. In FIG. 5, the buffer 512 is a circular buffer.

In the example shown in FIG. 5, the first workload node 506 is executed by writing to two slots (a subset of the data slots) of the buffer 512. As such, the first CBB 502 writes to the first two available slots in the buffer 512 and, in response, sends two credits to the credit manager 510. The credit manager 510 sends the two credits to the second CBB 504 when they become available. The two credits supplied to the second CBB 504 instruct the second CBB 504 to initiate execution of the second workload node 508. In FIG. 5, the second workload node 508 is executed by reading the next two slots of the buffer 512 in a first-in first-out (FIFO) manner.
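A minimal sketch of the credit flow in the pipeline 500 follows, under assumed semantics: the producer holds one credit per free buffer slot, credits it returns are forwarded to the consumer, and the consumer spends them to read. The global counters and function names are an illustration, not the patent's CnC-fabric signaling.

```python
from collections import deque

BUFFER_SLOTS = 5
buffer = deque()                    # stands in for the circular buffer 512

producer_credits = BUFFER_SLOTS     # first credit value: one credit per free slot
consumer_credits = 0

def producer_write(data):
    """First CBB 502 writes one slot and returns one credit to the credit manager."""
    global producer_credits, consumer_credits
    assert producer_credits > 0, "no free slot"
    buffer.append(data)
    producer_credits -= 1
    consumer_credits += 1           # credit routed to the consumer via the credit manager

def consumer_read():
    """Second CBB 504 spends one credit to read FIFO, freeing the slot for the producer."""
    global producer_credits, consumer_credits
    assert consumer_credits > 0, "nothing to read"
    data = buffer.popleft()         # FIFO read, as in FIG. 5
    consumer_credits -= 1
    producer_credits += 1           # freed slot returns to the producer
    return data

producer_write("tile0"); producer_write("tile1")   # first CBB writes two slots
print(consumer_read(), consumer_read())            # second CBB reads them back
```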
While an example manner of implementing the example graph compiler 202, the example one or more selectors 204, the example selector 300, and/or the accelerator 208 of FIG. 2 is illustrated in FIGS. 3 and/or 4, one or more of the elements, processes, and/or devices illustrated in FIGS. 2, 3, and/or 4 may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Further, the example CBB analyzer 302, the example kernel analyzer 304, the example compiler interface 306, and/or, more generally, the example selector 300 and/or the example one or more selectors 204 of FIGS. 2 and/or 3, the example graph interface 402, the example selector interface 404, the example workload analyzer 406, the example executable file generator 408, the example data store 410, the example plug-in 236, and/or, more generally, the example graph compiler 202 of FIGS. 2 and/or 4, and/or the example credit manager 210, the example CnC fabric 212, the example convolution engine 214, the example MMU 216, the example RNN engine 218, the example DSP 220, the example memory 222, the example configuration controller 224, the example kernel bank 232, and/or, more generally, the example accelerator 208 of FIG. 2 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the foregoing elements could be implemented by one or more analog or digital circuits, logic circuits, programmable processors, programmable controllers, graphics processing units (GPUs), digital signal processors (DSPs), application-specific integrated circuits (ASICs), programmable logic devices (PLDs), and/or field-programmable gate arrays (FPGAs). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example CBB analyzer 302, the example kernel analyzer 304, the example compiler interface 306, the example selector 300, the example one or more selectors 204, the example graph interface 402, the example selector interface 404, the example workload analyzer 406, the example executable file generator 408, the example data store 410, the example plug-in 236, the example graph compiler 202, the example credit manager 210, the example CnC fabric 212, the example convolution engine 214, the example MMU 216, the example RNN engine 218, the example DSP 220, the example memory 222, the example configuration controller 224, the example kernel bank 232, and/or the example accelerator 208 is hereby expressly defined to include a non-transitory computer-readable storage device or storage disc, such as a memory, a digital versatile disc (DVD), a compact disc (CD), a Blu-ray disc, etc., including the software and/or firmware.
Furthermore, the example graph compiler 202, the example one or more selectors 204, the example selector 300, and/or the example accelerator 208 of FIGS. 2, 3, and/or 4 may include one or more elements, processes, and/or devices in addition to, or instead of, those illustrated in FIGS. 2, 3, and/or 4, and/or may include more than one of any or all of the illustrated elements, processes, and devices. As used herein, the phrase "in communication with," including variations thereof, encompasses direct communication and/or indirect communication through one or more intermediate components, and does not require direct physical (e.g., wired) or continuous communication, but rather additionally includes selective communication at periodic intervals, scheduled intervals, aperiodic intervals, and/or one-time events.

Flowcharts representative of example hardware logic, machine-readable instructions, hardware-implemented state machines, and/or any combination thereof for implementing the example graph compiler 202, the example one or more selectors 204, the example selector 300, and/or the accelerator 208 are shown in FIGS. 6 and/or 7. The machine-readable instructions may be one or more executable programs, or portions of executable programs, for execution by a computer processor such as the processor 810 and/or the accelerator 812 shown in the example processor platform 800 described below in connection with FIG. 8. The programs may be embodied in software stored on a non-transitory computer-readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disc, or a memory associated with the processor 810 and/or the accelerator 812, but the entire programs and/or parts thereof could alternatively be executed by a device other than the processor 810 and the accelerator 812 and/or embodied in firmware or dedicated hardware. Further, although the example programs are described with reference to the flowcharts shown in FIGS. 6 and/or 7, many other methods of implementing the example graph compiler 202, the example one or more selectors 204, the example selector 300, and/or the accelerator 208 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, an FPGA, an ASIC, a comparator, an operational amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware.

The machine-readable instructions described herein may be stored in one or more of a compressed format, an encrypted format, a fragmented format, a compiled format, an executable format, a packaged format, etc. The machine-readable instructions described herein may be stored as data (e.g., portions of instructions, code, representations of code, etc.) that may be utilized to create, manufacture, and/or produce machine-executable instructions. For example, the machine-readable instructions may be fragmented and stored on one or more storage devices and/or computing devices (e.g., servers). The machine-readable instructions may require one or more of installation, modification, adaptation, updating, combining, supplementing, configuring, decryption, decompression, unpacking, distribution, reassignment, compilation, etc., in order to make them directly readable, interpretable, and/or executable by a computing device and/or other machine.
For example, the machine-readable instructions may be stored in multiple parts that are individually compressed, encrypted, and stored on separate computing devices, wherein the parts, when decrypted, decompressed, and combined, form a set of executable instructions that implement a program such as those described herein.

In another example, the machine-readable instructions may be stored in a state in which they can be read by a computer, but require the addition of a library (e.g., a dynamic link library (DLL)), a software development kit (SDK), an application programming interface (API), etc., in order to execute the instructions on a particular computing device or other device. In another example, the machine-readable instructions may need to be configured (e.g., settings stored, data entered, network addresses recorded, etc.) before the machine-readable instructions and/or the corresponding program(s) can be executed in whole or in part. Thus, the disclosed machine-readable instructions and/or corresponding program(s) are intended to encompass such machine-readable instructions and/or program(s) regardless of the particular format or state of the machine-readable instructions and/or program(s) when stored or otherwise at rest or in transit.

The machine-readable instructions described herein can be represented by any past, present, or future instruction language, scripting language, programming language, etc. For example, the machine-readable instructions may be represented using any of the following languages: C, C++, Java®, C#, Perl, Python, JavaScript®, HyperText Markup Language (HTML), Structured Query Language (SQL), Swift, etc.

As mentioned above, the example processes of FIGS. 6 and/or 7 may be implemented using executable instructions (e.g., computer- and/or machine-readable instructions) stored on a non-transitory computer- and/or machine-readable medium such as a hard disk drive, flash memory, read-only memory, a compact disc, a digital versatile disc, a cache, random-access memory, and/or any other storage device or storage disc in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporary buffering, and/or for caching of the information). As used herein, the term "non-transitory computer-readable medium" is expressly defined to include any type of computer-readable storage device and/or storage disc, to exclude propagating signals, and to exclude transmission media.

"Including" and "comprising" (and all forms and tenses thereof) are used herein as open-ended terms. Thus, whenever a claim employs any form of "include" or "comprise" (e.g., comprises, includes, comprising, including, having, etc.) as a preamble or within a claim recitation of any kind, it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim or recitation. As used herein, when the phrase "at least" is used as a transition term, for example in the preamble of a claim, it is open-ended in the same manner as the terms "comprising" and "including" are open-ended. The term "and/or," when used, for example, in a form such as A, B, and/or C, refers to any combination or subset of A, B, and C, such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and C.
As used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects, and/or things, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A and B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase "at least one of A or B" is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.

As used herein, singular references (e.g., "a," "an," "first," "second," etc.) do not exclude a plurality. The term "a" or "an" entity refers to one or more of that entity. The terms "a" (or "an"), "one or more," and "at least one" can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements, or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.

FIG. 6 is a flowchart representative of a process 600 that can be executed to implement the graph compiler 202, the selector 300, and/or the one or more selectors 204 of FIGS. 2, 3, and/or 4 to generate the executable file 230 of FIG. 2. In the example illustrated in FIG. 6, the graph interface 402 (FIG. 4) determines whether the workload 206 has been received and/or is otherwise available (block 602). The process 600 continues to wait in response to the graph interface 402 determining that the workload 206 has not been received and/or is otherwise unavailable (e.g., the control of block 602 returns a result of NO). Alternatively, if the graph interface 402 determines that the workload 206 has been received and/or is otherwise available (e.g., the control of block 602 returns a result of YES), the workload analyzer 406 (FIG. 4) parses the workload 206 to identify the workload nodes (block 604).

In response, the selector interface 404 (FIG. 4) generates a selector (e.g., one of the one or more selectors 204 of FIG. 2) for each workload node (block 606). The CBB analyzer 302 (FIG. 3) then acquires and/or otherwise identifies the input and output conditions of the associated CBB (block 608). In response, the selector interface 404 determines whether all of the generated selectors have supplied their respective input and/or output conditions and, as such, whether there are additional CBBs to analyze (block 610). If the selector interface 404 determines that there are more CBBs to analyze (the control of block 610 returns a positive result), control returns to block 608.
Alternatively, if the selector interface 404 determines that there are no additional CBBs to analyze (the control of block 610 returns a negative result), the kernel analyzer 304 (FIG. 3) acquires and/or otherwise identifies the input and output conditions of the associated kernel (block 612). In response, the selector interface 404 determines whether all of the generated selectors have provided their respective input and/or output conditions and, as such, whether there are additional kernels to analyze (block 614). If the selector interface 404 determines that there are more kernels to analyze (e.g., the control of block 614 returns a positive result), control returns to block 612. Alternatively, if the selector interface 404 determines that there are no more kernels to analyze (e.g., the control of block 614 returns a negative result), the plug-in 236 (FIGS. 2 and/or 4) maps the workload nodes to the CBBs and/or kernels based on the input and output conditions identified by the selectors (e.g., the one or more selectors 204 of FIG. 2) (block 616).

The executable file generator 408 (FIG. 4) then generates the executable file 230 (block 618) and sends the executable file 230 to the configuration controller 224 (block 620). In another example disclosed herein, in response to the execution of block 618, the executable file generator 408 may store the executable file 230 in the data store 410 for later use by an external and/or internal deployment system (e.g., the system 100 of FIG. 1). In the example shown in FIG. 6, the graph compiler 202 determines whether to continue operating (block 622). If the graph compiler 202 determines to continue operating (e.g., the control of block 622 returns a positive result), control returns to block 602 and the graph interface 402 determines whether the workload 206 has been received and/or is otherwise available. For example, the graph compiler 202 may determine to continue operating when an additional workload is available and/or when a new CBB and/or kernel is included in the accelerator 208. Alternatively, if the graph compiler 202 determines not to continue operating (the control of block 622 returns a negative result), the process 600 of FIG. 6 ends. That is, the process 600 can stop when no more workloads are available.
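The compile flow of FIG. 6 can be summarized in a compact sketch; the helper `pick_device` and the dictionary-based workload representation are hypothetical stand-ins for the components described above, not the patented implementation.

```python
def compile_workload(workload):
    """Sketch of process 600; comments cite the corresponding flowchart blocks."""
    conditions = []
    nodes = workload["nodes"]                             # block 604: parse workload nodes
    for node in nodes:
        conditions.append(("cbb", node["op"]))            # blocks 606-610: selector gathers CBB conditions
        conditions.append(("kernel", node["op"]))         # blocks 612-614: selector gathers kernel conditions
    mapping = {n["name"]: pick_device(n, conditions) for n in nodes}  # block 616: plug-in maps nodes
    return {"mapping": mapping}                           # block 618: executable (sent onward at block 620)

def pick_device(node, conditions):
    # Placeholder matching rule; the real plug-in matches input/output conditions.
    return "DSP 220" if ("cbb", node["op"]) in conditions else "kernel"

print(compile_workload({"nodes": [{"name": "n0", "op": "multiply"}]}))
```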
FIG. 7 is a flowchart representative of a process 700 that may be executed to implement the credit manager 210 and/or the configuration controller 224 of FIG. 2 to facilitate execution of the executable file 230 of FIG. 2. In FIG. 7, the configuration controller 224 (FIG. 2) determines whether the executable file 230 has been received from the graph compiler 202 and/or is otherwise available (block 702). If the configuration controller 224 determines that the executable file 230 has not been received and/or is otherwise unavailable (e.g., the control of block 702 returns a negative result), the process 700 continues to wait. Alternatively, if the configuration controller 224 determines that the executable file 230 has been received and/or is otherwise available (the control of block 702 returns a positive result), the configuration controller 224 parses the executable file 230 to identify the producing and consuming workload nodes and, in turn, the respective CBBs that are to execute them (block 704). In response, the configuration controller 224 sends the producing workload node to a first selected CBB (e.g., the convolution engine 214) (block 706). Similarly, the configuration controller 224 sends the consuming workload node to a second selected CBB (e.g., the DSP 220) (block 708).

In response, or in parallel, the credit manager 210 distributes credits to the first selected CBB (e.g., the convolution engine 214) to initiate execution of the producing workload node (block 710). In some examples disclosed herein, the operations of blocks 706, 708, and/or 710 act on all producing and/or consuming workload nodes. For example, the credit manager 210 distributes the credits corresponding to all producing workload nodes to all corresponding producing CBBs. In such an example, synchronization during runtime is achieved based on communication between the corresponding CBBs and/or the credit manager 210. Because credits are sent to and from the credit manager 210, the credit manager 210 determines whether credits have been received from the first selected CBB (e.g., the convolution engine 214) (block 712). If the credit manager 210 determines that no credits have been sent from the first selected CBB (e.g., the control of block 712 returns a negative result), the process 700 continues to wait. Alternatively, if the credit manager 210 determines that credits have been acquired and/or sent from the first selected CBB (e.g., the control of block 712 returns a positive result), the credit manager 210 distributes credits to the second selected CBB (e.g., the DSP 220) to initiate execution of the consuming workload node (block 714).

In response, the credit manager 210 determines whether credits have been received from the second selected CBB (e.g., the DSP 220) (block 716). If the credit manager 210 determines that no credits have been acquired or sent from the second selected CBB (e.g., the control of block 716 returns a negative result), the process 700 continues to wait. Alternatively, if the credit manager 210 determines that credits have been acquired and/or sent from the second selected CBB (the control of block 716 returns a positive result), the credit manager 210 distributes credits to the first selected CBB (e.g., the convolution engine 214) to continue executing the producing workload node (block 718).

The credit manager 210 determines whether execution of a workload node (e.g., the producing workload node or the consuming workload node) is complete (block 720). In some examples disclosed herein, the credit manager 210 may determine whether a workload node has completed execution by counting the credits generated against the buffer. For example, the credit manager 210 can know from the executable file 230 that a CBB acting as a producer (e.g., the first CBB 502 of FIG. 5) is to generate 50 credits while executing and/or otherwise processing the corresponding workload node. Accordingly, the credit manager 210 can determine that the workload has been executed in response to acquiring and/or otherwise receiving 50 credits from the producing workload node (e.g., the first CBB 502). If the credit manager 210 determines that the workload node (e.g., the producing workload node or the consuming workload node) has not completed execution (the control of block 720 returns a negative result), control returns to block 712 and the credit manager 210 determines whether credits have been received from the first selected CBB (e.g., the convolution engine 214).
In another example disclosed herein, if execution of a workload node (e.g., the producing workload node or the consuming workload node) is not complete (the control of block 720 returns a negative result) but the credit manager 210 determines that execution of the producing workload node is complete, control may proceed to block 714 to complete execution of the consuming workload node.

Alternatively, if the credit manager 210 determines that the workload node (e.g., the producing workload node or the consuming workload node) has completed execution (the control of block 720 returns a positive result), the configuration controller 224 determines whether additional producing and consuming workload nodes are available (block 722). If the configuration controller 224 determines that additional producing and consuming workload nodes are available (e.g., the control of block 722 returns a positive result), control returns to block 704. Alternatively, if the configuration controller 224 determines that no more producing or consuming workload nodes are available (e.g., the control of block 722 returns a negative result), the process 700 ends.
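The runtime side of process 700 can be sketched as a small dispatch-and-shuttle loop. Everything below is a hedged illustration: the StubCBB class, its load/grant/poll_credits interface, and the initial credit value of five are invented for the sketch; a real CBB would signal over the CnC fabric, and the expected-credit count would be read from the executable file 230 as described at block 720.

```python
class StubCBB:
    """Minimal stand-in for a CBB endpoint."""
    def __init__(self):
        self.pending = 0
    def load(self, node):             # receives its workload node (blocks 706/708)
        self.node = node
    def grant(self, credits):         # credits arriving from the credit manager
        self.pending += credits
    def poll_credits(self):           # credits returned to the credit manager
        credits, self.pending = self.pending, 0
        return credits

def run_executable(executable, producer, consumer, expected_credits):
    producer.load(executable["producer_node"])     # block 706
    consumer.load(executable["consumer_node"])     # block 708
    producer.grant(5)                              # block 710: initial credit value
    received = 0
    while received < expected_credits:             # block 720: completion check
        credits = producer.poll_credits()          # block 712
        if credits:
            consumer.grant(credits)                # block 714
            received += credits
            producer.grant(consumer.poll_credits())  # blocks 716-718

run_executable({"producer_node": "n0", "consumer_node": "n1"}, StubCBB(), StubCBB(), 5)
```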
FIG. 8 is a block diagram of an example processor platform 800 (e.g., a combined compilation and deployment system) structured to execute the instructions of FIGS. 6 and/or 7 to implement the example graph compiler 202, the example one or more selectors 204, the example selector 300, and/or the example accelerator 208 of FIGS. 2, 3, and/or 4. Alternatively, in some examples disclosed herein, the example graph compiler 202, the example one or more selectors 204, and/or the example selector 300 may operate on a compilation system (e.g., a compilation processor) that is separate from the example accelerator 208 and structured to execute the instructions of FIG. 6. In such a separated operation, the accelerator 208 may operate on a deployment system (e.g., a deployment processor), separate from the compilation system, structured to execute the instructions of FIG. 7 in order to execute the executable file. The processor platform 800 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a mobile phone, a smartphone, or a tablet such as an iPad®), a personal digital assistant (PDA), an Internet appliance, a gaming console, a personal video recorder, a set-top box, a headset or other wearable device, or any other type of computing device.

The illustrated example processor platform 800 includes a processor 810 and an accelerator 812. The processor 810 of the illustrated example is hardware. For example, the processor 810 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor-based (e.g., silicon-based) device. Similarly, the accelerator 812 can be implemented by, for example, one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, FPGAs, VPUs, controllers, and/or other CBBs from any desired family or manufacturer. The accelerator 812 of the illustrated example is hardware and may likewise be a semiconductor-based (e.g., silicon-based) device.

In this example, the accelerator 812 implements the example credit manager 210, the example CnC fabric 212, the example convolution engine 214, the example MMU 216, the example RNN engine 218, the example DSP 220, the example memory 222, the example configuration controller 224, the example kernel bank 232, and/or, more generally, the example accelerator 208 of FIG. 2. In this example, the processor 810 implements the example CBB analyzer 302, the example kernel analyzer 304, the example compiler interface 306, and/or, more generally, the example selector 300 and/or the example one or more selectors 204 of FIGS. 2 and/or 3, as well as the example graph interface 402, the example selector interface 404, the example workload analyzer 406, the example executable file generator 408, the example data store 410, the example plug-in 236, and/or, more generally, the example graph compiler 202 of FIGS. 2 and/or 4.

The processor 810 of the illustrated example includes a local memory 811 (e.g., a cache) and communicates, via a bus 818, with a main memory including a volatile memory 814 and a non-volatile memory 816. Further, the accelerator 812 of the illustrated example includes a local memory 813 (e.g., a cache) and likewise communicates with the main memory via the bus 818. The volatile memory 814 may be implemented by synchronous dynamic random-access memory (SDRAM), dynamic random-access memory (DRAM), RAMBUS® dynamic random-access memory (RDRAM®), and/or any other type of random-access memory device. The non-volatile memory 816 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 814, 816 is controlled by a memory controller.

The processor platform 800 of the illustrated example also includes an interface circuit 820. The interface circuit 820 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), a Bluetooth® interface, a Near Field Communication (NFC) interface, and/or a PCI Express interface.

In the illustrated example, one or more input devices 822 are connected to the interface circuit 820. The input device(s) 822 permit a user to enter data and/or commands into the processor 810 and/or the accelerator 812. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a trackpad, a trackball, an isopoint device, and/or a voice recognition system.

One or more output devices 824 are also connected to the interface circuit 820 of the illustrated example. The output devices 824 can be implemented, for example, by display devices (e.g., a light-emitting diode (LED), an organic light-emitting diode (OLED), a liquid crystal display (LCD), a cathode-ray tube (CRT) display, an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer, and/or speakers.
The interface circuit 820 of the illustrated example thus typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.

The interface circuit 820 of the illustrated example also includes a communication device, such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface, to facilitate the exchange of data with external machines (e.g., computing devices of any kind) via a network 826. The communication can be via, for example, an Ethernet connection, a digital subscriber line (DSL) connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.

The processor platform 800 of the illustrated example also includes one or more mass storage devices 828 for storing software and/or data. Examples of such mass storage devices 828 include floppy disk drives, hard disk drives, compact disc drives, Blu-ray disc drives, RAID (Redundant Array of Independent Disks) systems, and digital versatile disc (DVD) drives.

The machine-executable instructions 832 of FIGS. 6 and/or 7 may be stored in the mass storage device 828, in the volatile memory 814, in the non-volatile memory 816, and/or on a removable non-transitory computer-readable storage medium such as a CD or DVD.

From the foregoing, it will be appreciated that example methods, apparatus, and articles of manufacture have been disclosed that configure heterogeneous components in an accelerator. The disclosed methods, apparatus, and articles of manufacture improve the efficiency of using a computing device by generating and/or providing a different selector for each workload node within a workload. As such, the disclosed methods, apparatus, and articles of manufacture enable an executable file to be generated without the graph compiler having to be individually configured for each heterogeneous computational building block and/or kernel in the accelerator. Moreover, the examples disclosed herein include a credit manager that distributes credits to, and/or receives credits from, the heterogeneous computational building blocks and/or kernels in the accelerator. In this way, a computational building block and/or kernel can communicate with the other heterogeneous computational building blocks and/or kernels through the central fabric and the credit manager. The examples disclosed herein enable the graph compiler to efficiently map a workload (e.g., a received graph) across any number of heterogeneous computational building blocks and/or kernels in the accelerator. Similarly, the examples disclosed herein enable the received workload (e.g., graph) to be mapped efficiently when additional computational building blocks and/or kernels are later included in the accelerator, or when the current computational building blocks and/or kernels are modified or adjusted. The disclosed methods, apparatus, and articles of manufacture are accordingly directed to one or more improvements in the functioning of a computer.

Example methods, apparatus, systems, and articles of manufacture to configure heterogeneous components in an accelerator are disclosed herein.
Further examples and combinations thereof include the following:

Example 1 is an apparatus to configure heterogeneous components in an accelerator, the apparatus comprising: a graph compiler to identify a workload node in a workload and to generate a selector for the workload node; and the selector to identify input conditions and output conditions of a computational building block, the graph compiler to map the workload node to the computational building block in response to obtaining the identified input and output conditions from the selector.

Example 2 includes the apparatus of Example 1, wherein the graph compiler is to identify a second workload node in the workload and to generate a second selector for the second workload node.

Example 3 includes the apparatus of Example 2, wherein the second selector is to identify a second input condition and a second output condition of a kernel.

Example 4 includes the apparatus of Example 1, wherein the workload is a graph that includes the workload node and is obtained by the graph compiler.

Example 5 includes the apparatus of Example 1, wherein the input conditions correspond to input requirements of the computational building block, and the output conditions correspond to results of execution of the computational building block.

Example 6 includes the apparatus of Example 1, wherein the graph compiler is to generate an executable file in response to mapping the workload node to the computational building block.

Example 7 includes the apparatus of Example 1, further including a plug-in to form a translation layer between the workload node and the computational building block based on the identified input and output conditions, to enable the graph compiler to map the workload node to the computational building block.

Example 8 includes at least one non-transitory computer-readable storage medium comprising instructions that, when executed, cause at least one processor to at least: identify a workload node in a workload; generate, for the workload node, a selector associated with a computational building block that is to execute the workload node; identify input conditions and output conditions of the computational building block; and map the workload node to the computational building block in response to obtaining the identified input and output conditions.

Example 9 includes the at least one non-transitory computer-readable storage medium of Example 8, wherein the instructions, when executed, cause the at least one processor to identify a second workload node in the workload and to generate a second selector for the second workload node.

Example 10 includes the at least one non-transitory computer-readable storage medium of Example 9, wherein the instructions, when executed, further cause the at least one processor to identify a second input condition and a second output condition of a kernel.

Example 11 includes the at least one non-transitory computer-readable storage medium of Example 8, wherein the workload is a graph that includes the workload node.
Example 12 includes the at least one non-transitory computer-readable storage medium of Example 8, wherein the input conditions correspond to input requirements of the computational building block, and the output conditions correspond to results of execution of the computational building block.

Example 13 includes the at least one non-transitory computer-readable storage medium of Example 8, wherein the instructions, when executed, further cause the at least one processor to generate an executable file in response to mapping the workload node to the computational building block.

Example 14 includes the at least one non-transitory computer-readable storage medium of Example 8, wherein the instructions, when executed, further cause the at least one processor to form a translation layer between the workload node and the computational building block based on the identified input and output conditions, to enable mapping of the workload node to the computational building block.

Example 15 is an apparatus comprising: compiling means to identify a workload node in a workload and to generate, for the workload node, selecting means associated with a computational building block that is to execute the workload node; and the selecting means to identify input conditions and output conditions of the computational building block, the compiling means further to map the workload node to the computational building block in response to obtaining the identified input and output conditions.

Example 16 includes the apparatus of Example 15, wherein the compiling means is further to identify a second workload node in the workload and to generate second selecting means for the second workload node.

Example 17 includes the apparatus of Example 16, wherein the second selecting means is further to identify a second input condition and a second output condition of a kernel.

Example 18 includes the apparatus of Example 15, wherein the workload is a graph that includes the workload node.

Example 19 includes the apparatus of Example 15, wherein the input conditions correspond to input requirements of the computational building block, and the output conditions correspond to results of execution of the computational building block.

Example 20 includes the apparatus of Example 15, wherein the compiling means is further to generate an executable file in response to mapping the workload node to the computational building block.

Example 21 includes the apparatus of Example 15, wherein the compiling means is further to form a translation layer between the workload node and the computational building block based on the identified input and output conditions, to enable mapping of the workload node to the computational building block.

Example 22 is a method to configure heterogeneous components in an accelerator, the method comprising: identifying a workload node in a workload; generating, for the workload node, a selector associated with a computational building block that is to execute the workload node; identifying input conditions and output conditions of the computational building block; and mapping the workload node to the computational building block in response to obtaining the identified input and output conditions.
Example 23 includes the method of Example 22, further including identifying a second workload node in the workload and generating a second selector for the second workload node.

Example 24 includes the method of Example 23, further including identifying a second input condition and a second output condition of a kernel.

Example 25 includes the method of Example 22, wherein the workload is a graph that includes the workload node.

Example 26 includes the method of Example 22, wherein the input conditions correspond to input requirements of the computational building block, and the output conditions correspond to results of execution of the computational building block.

Example 27 includes the method of Example 22, further including generating an executable file in response to mapping the workload node to the computational building block.

Example 28 includes the method of Example 22, further including forming a translation layer between the workload node and the computational building block based on the identified input and output conditions, to enable mapping of the workload node to the computational building block.

Example 29 is an apparatus to operate heterogeneous components, the apparatus comprising: a buffer including a number of data slots; a credit manager; a first computational building block having a first credit value, the first computational building block to execute a first workload node, to write data to a subset of the number of data slots in response to executing the first workload node, and to send a second credit value, smaller than the first credit value, to the credit manager; and a second computational building block to, in response to receiving the second credit value from the credit manager, read the data in the subset of the number of data slots and execute a second workload node.

Example 30 includes the apparatus of Example 29, further including a controller to send a control message and a configuration message to the first computational building block to supply the first workload node.

Example 31 includes the apparatus of Example 30, wherein the controller is to send the first workload node to the first computational building block and the second workload node to the second computational building block.

Example 32 includes the apparatus of Example 29, wherein the credit manager is further to determine whether execution of the first workload node is complete.

Example 33 includes the apparatus of Example 29, wherein the second computational building block is further to send a third credit value, smaller than the second credit value, to the credit manager.

Example 34 includes the apparatus of Example 33, wherein the credit manager is further to send the third credit value to the first computational building block.
Example 35 includes at least one non-transitory computer-readable storage medium comprising instructions that, when executed, cause at least one processor to at least: execute a first workload node; write data to a number of data slots using a first credit value in response to executing the first workload node; send a second credit value, smaller than the first credit value, to a credit manager; and, in response to receiving the second credit value from the credit manager, read the data in the number of data slots using the second credit value and execute a second workload node.

Example 36 includes the at least one non-transitory computer-readable storage medium of Example 35, wherein the instructions, when executed, cause the at least one processor to send a control message and a configuration message to supply the first workload node.

Example 37 includes the at least one non-transitory computer-readable storage medium of Example 36, wherein the instructions, when executed, further cause the at least one processor to send the first workload node to a first computational building block and the second workload node to a second computational building block.

Example 38 includes the at least one non-transitory computer-readable storage medium of Example 35, wherein the instructions, when executed, further cause the at least one processor to determine whether execution of the first workload node is complete.

Example 39 includes the at least one non-transitory computer-readable storage medium of Example 35, wherein the instructions, when executed, further cause the at least one processor to send a third credit value, smaller than the second credit value, to the credit manager.

Example 40 includes the at least one non-transitory computer-readable storage medium of Example 39, wherein the instructions, when executed, further cause the at least one processor to send the third credit value to a computational building block.

Example 41 is an apparatus comprising: first computing means to execute a first workload node, to write data to a number of data slots using a first credit value in response to executing the first workload node, and to send a second credit value, smaller than the first credit value, to credit managing means; and second computing means to, in response to receiving the second credit value from the credit managing means, read the data in the number of data slots using the second credit value and execute a second workload node.

Example 42 includes the apparatus of Example 41, further including controlling means to send a control message and a configuration message to the first computing means to supply the first workload node.

Example 43 includes the apparatus of Example 42, wherein the controlling means is further to send the first workload node to the first computing means and the second workload node to the second computing means.
Example 44 includes the apparatus of Example 41, wherein the credit managing means is further to determine whether execution of the first workload node is complete.

Example 45 includes the apparatus of Example 41, wherein the second computing means is further to send a third credit value, smaller than the second credit value, to the credit managing means.

Example 46 includes the apparatus of Example 45, wherein the credit managing means is further to send the third credit value to the first computing means.

Example 47 is a method of operating heterogeneous components, the method comprising: executing a first workload node; writing data to a number of data slots using a first credit value in response to executing the first workload node; sending a second credit value, smaller than the first credit value, to a credit manager; and, in response to receiving the second credit value from the credit manager, reading the data in the number of data slots and executing a second workload node.

Example 48 includes the method of Example 47, further including sending a control message and a configuration message to a computational building block to supply the first workload node.

Example 49 includes the method of Example 47, further including sending the first workload node to a first computational building block and the second workload node to a second computational building block.

Example 50 includes the method of Example 47, further including determining whether execution of the first workload node is complete.

Example 51 includes the method of Example 47, further including sending a third credit value, smaller than the second credit value, to the credit manager.

Example 52 includes the method of Example 51, further including sending the third credit value to the computational building block.

Although certain example methods, apparatus, and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the claims of this patent. The following claims are hereby incorporated into this detailed description by this reference, with each claim standing on its own as a separate embodiment of the present disclosure.

Reference signs: 100, 200 computer system; 102 system memory; 104 heterogeneous system; 106 host processor; 108, 140, 308, 412 communication bus; 110, 208, 812 accelerator; 112, 214 convolution engine; 114, 218 RNN engine; 116, 222 memory; 118, 216 memory management unit (MMU); 120, 220 DSP; 122 controller; 124-134, 238-244 schedulers; 136, 138 kernel library; 202 graph compiler; 204, 300 selector; 206 workload; 210, 510 credit manager; 212 control and configuration (CnC) fabric; 224 configuration controller; 226 DMA unit; 228, 512 buffer; 230 executable file; 232 kernel bank; 233 data fabric; 234 configuration and control messages; 236 plug-in; 302 CBB analyzer; 304 kernel analyzer; 306 compiler interface; 402 graph interface; 404 selector interface; 406 workload analyzer; 408 executable file generator; 410 data store; 500 pipeline; 502, 504 CBB; 506, 508 workload node; 800 processor platform; 810 processor |
Methods and circuits to create reduced field programmable gate arrays (RFPGAs) from the configuration data of field programmable gate arrays (FPGAs) are disclosed. The configurable elements of the FPGA are replaced with standard cell circuits that reproduce the functionality of the configured FPGA. Specifically, reduced logic blocks are derived from the configuration data of configurable logic blocks. Similarly, reduced input/output blocks and reduced matrices are derived from the configuration data for the input/output blocks and programmable switch matrices of the FPGA, respectively. The reduced logic blocks are arranged in a layout similar to that of the original CLBs so that timing relationships remain similar in the RFPGA and the FPGA. The actual timing of the RFPGA can be modified by increasing or decreasing the timing delay on various signal paths based on the FPGA design or additional timing constraints. To reduce the time required to generate RFPGAs, a database can be used to contain configurable logic block models and the corresponding reduced logic block models. The database can be expanded as new reduced logic block models are created for configurable logic block models that were not in the database. Similarly, a database can be used for the input/output blocks and programmable switch matrices of an FPGA. |
What is claimed is: 1. A method to convert an FPGA design file for an FPGA having configurable logic blocks, input/output blocks, and programmable switch matrices to form a reduced FPGA, said method comprising: extracting models for a plurality of configurable logic blocks from said FPGA design files; searching a CLB database to find a first corresponding reduced logic block model for each model of the configurable logic blocks; building a new reduced logic block model for each model of the configurable logic blocks when the first corresponding reduced logic block model is not found in the CLB database; and adding the new reduced logic block model to the CLB database. 2. The method of claim 1, wherein each reduced logic block model has an associated shape parameter. 3. The method of claim 2, further comprising: evaluating a first shape parameter of the first corresponding reduced logic block model; building a second corresponding reduced logic block model for the model of the configurable logic block when the first shape parameter is not sufficient; and adding the second corresponding reduced logic block model to the CLB database. 4. The method of claim 1, wherein the first corresponding reduced logic block model is internally placed and routed. 5. The method of claim 1, further comprising: extracting models for a plurality of the input/output blocks from said FPGA design files; searching an IOB database to find a first corresponding reduced input/output logic block model for each model of the input/output logic blocks; building a new reduced input/output logic block model for each model of the input/output blocks when a corresponding reduced input/output logic block is not found in the IOB database; and adding the new reduced input/output logic block to the IOB database. 6. The method of claim 5, wherein each reduced input/output block model has an associated shape parameter. 7. The method of claim 6, further comprising: evaluating a first shape parameter of the first corresponding reduced input/output block model; building a second corresponding reduced input/output block model for the model of the input/output block when the first shape parameter is not sufficient; and adding the second corresponding reduced input/output block model to the IOB database. 8. The method of claim 5, wherein the first corresponding reduced input/output block model is internally placed and routed. 9. A method to convert an FPGA design file for an FPGA having configurable logic blocks, input/output blocks, and programmable switch matrices to form a reduced FPGA, said method comprising: extracting models for a plurality of the input/output blocks from said FPGA design files; searching an IOB database to find a first corresponding reduced input/output logic block model for each model of the input/output logic blocks; building a new reduced input/output logic block model for each model of the input/output blocks when a corresponding reduced input/output logic block is not found in the IOB database; and adding the new reduced input/output logic block to the IOB database. 10. The method of claim 9, wherein each reduced input/output block model has an associated shape parameter. 11.
The method of claim 10, further comprising: evaluating a first shape parameter of the first corresponding reduced input/output block model; building a second corresponding reduced input/output block model for the model of the input/output block when the first shape parameter is not sufficient; and adding the second corresponding reduced input/output block model to the IOB database. 12. The method of claim 9, wherein the first corresponding reduced input/output block model is internally placed and routed. |
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to integrated circuits (ICs) such as field programmable gate arrays (FPGAs). More specifically, the present invention relates to methods for converting FPGAs into standard cell integrated circuits.
2. Discussion of Related Art
FIG. 1 is a simplified schematic diagram of a conventional FPGA 110. FPGA 110 includes user logic circuits such as input/output blocks (IOBs) 160, configurable logic blocks (CLBs) 150, and programmable interconnect 130, which contains programmable switch matrices (PSMs). Each IOB 160 includes a bonding pad (not shown) to connect the various user logic circuits to pins (not shown) of FPGA 110. Some FPGAs separate the bonding pad from the IOB and may include multiple IOBs for each bonding pad. Each IOB 160 and CLB 150 can be configured through configuration port 120 to perform a variety of functions. Configuration port 120 is typically coupled to external pins of FPGA 110 through various bonding pads to provide an interface for external configuration devices to program the FPGA. Programmable interconnect 130 can be configured to provide electrical connections between the various CLBs and IOBs by configuring the PSMs and other programmable interconnect points (PIPs, not shown) through configuration port 120. IOBs can be configured to drive output signals to the corresponding pins of the FPGA, to receive input signals from the corresponding pins of FPGA 110, or to be bi-directional. FPGA 110 also includes dedicated internal logic. Dedicated internal logic performs specific functions and can only be minimally configured by a user. Configuration port 120 is one example of dedicated internal logic. Other examples may include dedicated clock nets (not shown), delay lock loops (DLL) 180, block RAM (not shown), power distribution grids (not shown), and boundary scan logic 170 (i.e., IEEE Boundary Scan Standard 1149.1). FPGA 110 is illustrated with 16 CLBs, 16 IOBs, and 9 PSMs for clarity only. Actual FPGAs may contain thousands of CLBs, thousands of PSMs, hundreds of IOBs, and hundreds of pads. Furthermore, FPGA 110 is not drawn to scale. For example, a typical pad in an IOB may occupy more area than a CLB or PSM. The ratio of the number of CLBs, IOBs, PSMs, and pads can also vary. FPGA 110 also includes dedicated configuration logic circuits to program the user logic circuits. Specifically, each CLB, IOB, and PSM contains a configuration memory (not shown) which must be configured before each CLB, IOB, or PSM can perform a specified function. Typically, the configuration memories within an FPGA use static random access memory (SRAM) cells. The configuration memories of FPGA 110 are connected by a configuration structure (not shown) to configuration port 120 through a configuration access port (CAP) 125. A configuration port (a set of pins used during the configuration process) provides an interface for external configuration devices to program the FPGA. The configuration memories are typically arranged in rows and columns. The columns are loaded from a frame register which is in turn sequentially loaded from one or more sequential bitstreams. (The frame register is part of the configuration structure referenced above.) In FPGA 110, configuration access port 125 is essentially a bus access point that provides access from configuration port 120 to the configuration structure of FPGA 110. FIG. 2 illustrates a conventional method used to configure FPGA 110.
Specifically, FPGA 110 is coupled to a configuration device 230, such as a serial programmable read only memory (SPROM), an electrically programmable read only memory (EPROM), or a microprocessor. Configuration port 120 receives configuration data, usually in the form of a configuration bitstream, from configuration device 230. Typically, configuration port 120 contains a set of mode pins, a clock pin, and a configuration data input pin. Configuration data from configuration device 230 is typically transferred serially to FPGA 110 through a configuration data input pin. In some embodiments of FPGA 110, configuration port 120 comprises a set of configuration data input pins to increase the data transfer rate between configuration device 230 and FPGA 110 by transferring data in parallel. Further, some FPGAs allow configuration through a boundary scan chain. Specific examples for configuring various FPGAs can be found on pages 4-46 to 4-59 of "The Programmable Logic Data Book", published in January 1998 by Xilinx, Inc., and available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, which pages are incorporated herein by reference. Design engineers incorporate FPGAs into systems due to the flexibility provided by an FPGA. Because FPGAs are programmable and re-programmable, a design engineer can easily accommodate changes to the system specification, correct errors in the system, or make improvements to the system by reprogramming the FPGA. However, once the system design is complete, the flexibility provided by the programmability of an FPGA is sometimes not required. Furthermore, because FPGAs are relatively costly ICs and require a configuration device which also increases cost, mass produced systems may not tolerate the cost of including FPGAs. Thus, in some systems that are mass produced, FPGAs used in the design phase of the system are replaced by less costly integrated circuits. Most FPGA manufacturers provide a method to convert an FPGA design into a less costly integrated circuit. For example, some FPGA manufacturers replace the programmable elements of an FPGA with metal connections based on the design file of the FPGA to produce a mask programmed IC. All other circuitry remains the same between the mask programmed IC and the FPGA. The mask programmed IC is cheaper to manufacture than the FPGA and eliminates the need for the configuration device in the mass produced system. However, the mask programmed IC may still be more costly than desired because the semiconductor area, which is a major factor in the cost of an IC, required by the mask programmed IC is nearly the same as that of the FPGA. Consequently, the manufacturing cost of the mask programmed IC is not significantly lower than that of the FPGA. Some manufacturers use a "sea-of-gates" approach to map an FPGA design into an application specific integrated circuit (ASIC). Specifically, the used CLBs, IOBs, memory cells, and programmable interconnect logic of the FPGA are mapped into corresponding areas of a gate array base. See, for example, U.S. Pat. No. 5,550,839 entitled "Mask-Programmed Integrated Circuits Having Timing and Logic Compatibility to User-Configured Logic Arrays" and U.S. Pat. No. 5,815,405 entitled "Method and Apparatus for Converting a Programmable Logic Device Representation of a Circuit into a Second Representation of the Circuit." However, "sea-of-gates" gate arrays are not well suited to reproduce the extensive routing and other circuits available in an FPGA.
Thus, gate array implementation of FPGA designs may prove costly for FPGA designs requiring extensive routing. Hence, there is a need for a method and structure to convert an FPGA design into an integrated circuit which minimizes the cost of the integrated circuit by reducing its size.
SUMMARY
The present invention replaces FPGAs with cost-effective reduced FPGAs (RFPGAs) for high-volume production. Specifically, the present invention uses a completed FPGA design file to design a specific RFPGA with all the functionality of the FPGA design. However, the resulting RFPGA can be manufactured using standard cell techniques, which greatly reduces the cost of the RFPGA as compared to the FPGA. Furthermore, the present invention minimizes the semiconductor area of the RFPGA, which further reduces its cost. Additionally, the RFPGA can allow device package changes to further reduce the cost of the RFPGA. Specifically, in one embodiment of the present invention, models for the configured configurable logic blocks (CLBs), input/output blocks (IOBs), and programmable switch matrices (PSMs) are extracted from the FPGA design file. Then, a reduced logic block (RLB) model is created for each CLB model. Similarly, a reduced input/output block (RIOB) model is created for each IOB model, and a routing matrix (RM) model is created for each PSM model. Additionally, used dedicated internal logic such as block RAMs and boundary scan logic is extracted from the FPGA, and a model for each instance is instantiated into the RFPGA. Specifically, in one embodiment of the present invention, the RFPGA includes a non-uniform array of logic blocks surrounded by a plurality of input/output blocks. An interconnect structure having a plurality of routing matrices connects the various logic blocks within the non-uniform array of logic blocks. The logic blocks of the non-uniform array of logic blocks are reduced logic blocks which correspond to the configurable logic blocks of an FPGA design. Similarly, the input/output blocks are reduced versions of the IOBs of the FPGA design. In accordance with another embodiment of the present invention, an integrated circuit includes a first plurality of logic circuits, a routing ring surrounding the first plurality of logic circuits, and a second plurality of logic circuits outside the routing ring. The routing ring has an internal routing grid and an external routing grid. The pitch of the internal routing grid and the external routing grid may differ. The first plurality of logic circuits is placed on the internal routing grid while the second plurality of logic circuits is placed on the external routing grid. The routing ring may include a plurality of wires each having a first endpoint on the internal routing grid and a second endpoint on the external routing grid. Furthermore, many embodiments of the present invention control the timing of the RFPGA. For example, after the RFPGA model is created, the timing characteristics of the RFPGA are extracted and compared to various signal timing constraints. The signal paths which do not satisfy the signal timing constraints are modified to satisfy them. For example, additional timing buffers or vias may be added to a signal path to increase its timing delay.
Alternatively, the signal path may be rerouted to decrease the timing delay. Because creation of RLB and RIOB models can be very time consuming, some embodiments of the present invention use a CLB and IOB database to reduce the time required to create the RFPGA. Specifically, a CLB database would contain corresponding RLB models for particular CLB models. If the CLB database does not include a corresponding RLB model, a new RLB model is created and stored in the CLB database. The IOB database works in a similar manner. The present invention will be more fully understood in view of the following description and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a simplified block diagram of a conventional FPGA.
FIG. 2 is a prior art block diagram of an FPGA configured with a configuration device.
FIG. 3 is a simplified block diagram of a reduced FPGA in accordance with one embodiment of the present invention.
FIG. 4(a) is another simplified block diagram of a reduced FPGA in accordance with one embodiment of the present invention.
FIG. 4(b) is a simplified diagram of the internal and external routing grids of a reduced FPGA.
FIG. 5 is a flow diagram of a method to convert an FPGA design file into a reduced FPGA in accordance with one embodiment of the present invention.
FIG. 6 is a block diagram of a conventional configurable logic block (CLB).
FIG. 7 is a block diagram of a conventional function generator.
FIGS. 8(a)-8(d) are schematic diagrams of reduced function generators in accordance with one embodiment of the present invention.
FIG. 9 is a simplified block diagram of a non-uniform array of logic blocks used to illustrate formation of reduced logic blocks (RLBs) in accordance with one embodiment of the present invention.
FIG. 10 is a simplified diagram of a conventional programmable switch matrix (PSM).
FIG. 11(a) is a simplified diagram of a configured programmable switch matrix (PSM).
FIGS. 11(b)-11(f) are simplified diagrams of routing matrices (RMs) in accordance with one embodiment of the present invention.
FIG. 12(a) is a simplified block diagram of a conventional input/output block (IOB).
FIG. 12(b) is a simplified block diagram of a reduced input/output block (RIOB) in accordance with one embodiment of the present invention.
FIG. 13 is a block diagram of a reduced FPGA in accordance with one embodiment of the present invention.
FIG. 14 is a flow diagram of a method to convert an FPGA design file into a reduced FPGA in accordance with one embodiment of the present invention.
DETAILED DESCRIPTION
In accordance with the present invention, FPGA designs are converted into integrated circuits that are formed using standard cell libraries. Each part of the FPGA is reduced so that the standard cell implementation of the FPGA requires only a small amount of semiconductor area as compared to a standard FPGA or standard gate array. FIG. 3 is a simplified schematic diagram of a reduced FPGA (RFPGA) 300 in accordance with one embodiment of the present invention. As used herein, "reduced FPGA" refers to an integrated circuit which performs the function of the FPGA design file but can be manufactured using standard cell libraries. Specifically, in RFPGA 300, CLBs are replaced with reduced logic blocks (RLBs) 350, IOBs are replaced by reduced input/output blocks (RIOBs) 360, and programmable interconnect is replaced with reduced interconnect 330, in which PSMs are replaced by routing matrices (RMs).
Furthermore, dedicated logic, such as delay lock loops (DLLs), boundary scan logic, and configuration access ports, can be replaced by circuits, such as CAP 325, boundary scan 370, and DLL 380, having equivalent logic functions but tailored for the RFPGA. Specific reduction techniques for CLBs, IOBs, and programmable interconnect 130, including PSMs, are discussed below. Generally, each CLB, IOB, and PSM is individually reduced to use as little semiconductor area as possible. The reduced parts are then arranged in the RFPGA in the same basic arrangement as the original FPGA. Thus, an RFPGA contains a non-uniform array of RLBs, interspersed with a non-uniform array of RMs and a non-uniform reduced interconnect, surrounded by non-uniform RIOBs. However, some embodiments of the present invention use the same amount of area for each RLB to allow a uniform array of RLBs. Furthermore, some embodiments of the present invention may combine programmable features with RLBs, RIOBs, and RMs to create partially configurable RFPGAs. In these embodiments, the portions of the FPGA which must remain programmable can be built directly from the FPGA design while the remaining portions can be converted to RFPGA elements as described below. Each row of RLBs has a height equal to the height of the tallest RLB in the row. Similarly, each column of RLBs has a width equal to the width of the widest RLB in the column. The spacing between different rows and columns of the RFPGA can vary depending on the semiconductor area required by reduced interconnect 330 and the various routing matrices. Like the RLBs, the width and height required by the rows and columns of RIOBs are dictated by the widest and tallest RIOB of each column and row, respectively. Furthermore, in some embodiments, the width and height required by the rows and columns of RIOBs are also dependent on the width and height of the RLB columns and rows. FIG. 4(a) is a simplified schematic diagram of an RFPGA 400, in accordance with a second embodiment of the present invention. RFPGA 400 differs from RFPGA 300 by including a routing ring 440. In some embodiments, routing ring 440 begins as allocated area on RFPGA 300. Later, a routing process can be used to fill the allocated area of routing ring 440 with wire paths connecting the RLBs to RIOBs. However, in other embodiments, routing ring 440 allows the RIOBs to use a different routing pitch than the components of RFPGA 400 within routing ring 440. As shown in FIG. 4(b), routing ring 440 has an internal routing grid 442 that matches the routing pitch used by the RLBs, RMs, and reduced interconnect. Routing ring 440 also has an external routing grid 444 that matches the routing pitch of the RIOBs. For example, the routing pitch of the internal routing grid may be larger or smaller than the routing pitch of the external routing grid. Furthermore, the internal and external routing grids may have the same routing pitch but not be aligned. Also shown in FIG. 4(b) are routing ring wires 445, 446, and 447, which illustrate disparate-pitch components on internal routing grid 442 and external routing grid 444 being connected by straight-line connections in accordance with one embodiment of the present invention. Furthermore, because routing ring 440 allows different routing pitches to be used in RFPGA 400, the reduction of the RIOBs can be optimized for one routing grid and the reduction of the other components of RFPGA 400 can be optimized for a second routing grid.
Furthermore, by having separate routing grids for the RIOBs and the RLBs, RIOBs corresponding to unused IOBs can be removed from RFPGA 400. FIG. 5 is a flow diagram for a method to convert an FPGA design 505 into an RFPGA. In FIG. 5, dashed arrows represent information flow between steps. Solid arrows represent process flow, which may also include information flow. Each step can be performed as soon as all the necessary information is provided from other steps. For clarity, the techniques used to resolve timing issues in RFPGAs are discussed with respect to FIG. 14 after discussion of the conversion of an FPGA to an RFPGA. The configuration data for IOBs, CLBs, and the programmable interconnect, including the PSMs (i.e., the routing information), are extracted in IOB extraction step 512, CLB extraction step 515, and routing extraction step 518, respectively. Extraction of configuration data for IOBs, CLBs, and routing information is well known in the art of FPGA programming. For example, a "compile away" method for extracting configuration data for IOBs is described by Baxter in U.S. Pat. No. 5,815,405 entitled "Method and Apparatus for Converting a Programmable Logic Device Representation of a Circuit into a Second Representation of the Circuit." Other well known methods, such as "instantiate only required components," can also be used. IOB extraction step 512, CLB extraction step 515, and routing extraction step 518 can be performed in parallel or in series. After extracting the configuration data for IOBs and CLBs, IOB models and CLB models are generated in IOB model generation step 522 and CLB model generation step 525, respectively. Specifically, in IOB model generation step 522, a model for each IOB is generated using well known techniques, such as those described in U.S. Pat. No. 5,815,405. Similarly, a model for each CLB is generated using well known techniques, such as those described in U.S. Pat. No. 5,815,405, in CLB model generation step 525. Once the models for the CLBs and IOBs are built, an estimate for the area required by each RIOB, RLB, RM, and reduced interconnect 330 is derived in area evaluation step 533. By calculating an approximation for the area of each RLB, each RM, and the reduced interconnect, an optimal grid pitch can be determined for the non-uniform array of RLBs. However, this optimal grid pitch must obey the silicon device layout rules used to manufacture the RFPGA. Similarly, by calculating an approximation for the area of each RIOB, an optimal grid pitch can be determined for the RIOBs surrounding routing ring 440. In one embodiment, the grid pitch is chosen to fit the largest row and column. Then the sizes of the other rows and columns are adjusted to fit the chosen grid pitch. In another embodiment, a variable grid pitch is determined by the actual size of each row and column. After determining the optimal grid pitch, the routing grid for the non-uniform array of RLBs inside routing ring 440 (FIG. 4(b)) and the routing grid for the RIOBs outside of routing ring 440 are built in a build grid step 537 (a short sketch of this row-and-column sizing appears below). In some embodiments of the present invention, the vertical pitch may differ from the horizontal pitch. This non-square pitch is often used to take advantage of additional metal layers that were either unused or unavailable in the original FPGA design. Using the IOB models and the area approximations for RIOBs, a model for each RIOB is created in a build RIOB models step 542.
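As a rough illustration of the sizing rule used in build grid step 537 (each row as tall as its tallest RLB, each column as wide as its widest RLB), consider the following Python sketch; the function name and the (width, height) tuples are assumptions made for illustration, not part of the patent:

def build_grid(rlb_sizes):
    """rlb_sizes[r][c] is a (width, height) pair for the RLB at row r, column c.
    Returns per-column widths and per-row heights for the non-uniform array.
    Illustrative only: a real grid must also obey silicon layout rules."""
    num_rows = len(rlb_sizes)
    num_cols = len(rlb_sizes[0])
    col_widths = [max(rlb_sizes[r][c][0] for r in range(num_rows)) for c in range(num_cols)]
    row_heights = [max(rlb_sizes[r][c][1] for c in range(num_cols)) for r in range(num_rows)]
    return col_widths, row_heights

# A 2x2 example: the widest RLB in each column and the tallest in each row win.
widths, heights = build_grid([[(5, 3), (2, 4)], [(4, 6), (3, 2)]])
print(widths, heights)  # -> [5, 3] [4, 6]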
Similarly, using the CLB models and the area approximations for RLBs, a model for each RLB is created in a build RLB models step 545. Specific methods and techniques in accordance with the present invention to reduce CLBs into RLBs and to reduce IOBs into RIOBs are described below. The draw list for the reduced interconnect, including the RMs, is produced in a derive draw list step 547. Methods and techniques used to form reduced interconnect 330 from programmable interconnect 130 are described below. After deriving the RIOB models, the components of each RIOB model are conceptually placed in a layout design and the RIOB model is internally routed in a place & route RIOB internals step 554. Similarly, after deriving the RLB models, the components of each RLB model are conceptually placed and the RLB model is internally routed in a place & route RLB internals step 558. Then the RLB models are arranged relative to each other for placement on a die. Techniques for placing and routing semiconductor devices from models are well known in the art and are not discussed in detail herein. However, because the structure of an RFPGA is very similar to the original FPGA, relative placement of blocks from the FPGA can be used for almost all components of the RFPGA. Routing simply follows the optimized routing for the original FPGA. For example, the relative placement of the RLBs to each other is the same as the relative placement of the corresponding CLBs to each other. Thus, if two RLBs correspond to adjacent CLBs, the two RLBs are placed adjacent to each other. However, optimizations to reduce area are possible due to the elimination of unused structures of the original FPGA. The RLBs are placed on the internal routing grid and interconnected by routing reduced interconnect 330 in a route reduced interconnect step 565. As described below, most embodiments of reduced interconnect 330 are metal wires and vias. Consequently, conventional routing techniques can be used with the draw list of reduced interconnect 330. In general, special tools are not required because the actual relative coordinates of each wire segment are known and used, with possible changes due to different routing grids. For embodiments of the present invention using routing ring 440, e.g., RFPGA 400, routing ring 440 is built in a build routing ring step 570. Routing ring 440 is made up of simple wiring connections between points on the internal routing grid and points on an external grid pitch. Specifically, each wire in routing ring 440 has a first endpoint on the internal routing grid and a second endpoint on the external routing grid. The locations of the first endpoints of the wires in routing ring 440 are dictated by the placement of the RLBs, and the locations of the second endpoints are dictated by the placement of the RIOBs. In accordance with one embodiment of the present invention, standard routing tools can be used by defining a routing grid in routing ring 440 equal to the lowest common multiple of both the internal routing grid and the external routing grid (see the sketch following this paragraph). After building the routing ring, the connections from the RLBs are routed to routing ring 440 in a route RLBs to ring step 575. Because routing ring 440 is used to connect RLBs to nearby RIOBs, the connections between the RLBs and RIOBs can be formed without crossing. Therefore, direct (e.g., straight-line) wiring paths can be used in routing ring 440, as illustrated in FIG. 4(b).
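The common-multiple routing grid and the straight ring wires described above can be illustrated with a short sketch; the integer pitches, function names, and point representation are hypothetical:

from math import gcd

def ring_routing_grid(internal_pitch, external_pitch):
    """Per the embodiment above, standard routing tools can be used in the
    ring by defining a grid equal to the lowest common multiple of the
    internal and external pitches. Illustrative arithmetic only."""
    return internal_pitch * external_pitch // gcd(internal_pitch, external_pitch)

def ring_wire(inner_point, outer_point):
    # Each ring wire is a straight segment with one endpoint on the internal
    # grid and one on the external grid (see FIG. 4(b)).
    return (inner_point, outer_point)

print(ring_routing_grid(3, 4))          # -> 12
print(ring_wire((9, 0), (12, -5)))      # a direct (straight-line) connection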
Accordingly, the wiring in routing ring 440 may be formed using a single metal layer. However, other embodiments of the present invention may take advantage of the three-dimensional nature of silicon devices to use other types of connections in routing ring 440. For example, some wires in an FPGA may be twisted in various manners due to the limitation of the routing channels of an FPGA. These twisted wires may be untwisted, using routing ring 440, to further reduce the area required by the RFPGA. Furthermore, in some embodiments of the present invention, vias and other active circuits, such as timing buffers, may be added to the wiring paths to increase the propagation delay of the wiring path. Some embodiments may also increase the net length of a wiring path to increase its capacitance and thereby increase its propagation delay. Then, the RIOBs are placed around routing ring 440 in a place RIOB step 580. The RIOBs are routed to routing ring 440 in a route RIOBs to ring step 585. Lastly, the RFPGA is finished by adding the outermost boundary zone, which is used to place scribe lines accurately on a silicon wafer during manufacturing. This last step is performed in an add die demarcation line step 587. At this point, the RFPGA design is complete. For embodiments of the present invention which do not include routing ring 440, e.g., RFPGA 300, build routing ring step 570, route RLBs to ring step 575, and route RIOBs to ring step 585 are omitted. In these embodiments, the RIOBs are routed directly to the appropriate RLBs. Typically, a quality assurance (QA)/design rule check step 590 is performed to evaluate the RFPGA design and ensure that the RFPGA obeys the semiconductor processing rules of the semiconductor technology which will be used to manufacture the RFPGA. If quality assurance (QA)/design rule check step 590 detects an error in the RFPGA design, processing returns to build RIOB models step 542, build RLB models step 545, or derive draw list step 547, depending on whether the problem occurred in an RIOB, an RLB, or an RM, respectively. After quality assurance (QA)/design rule check step 590 is satisfied, a suitable package is selected for the RFPGA in a select packaging step 594. Because the RFPGA is smaller and may require fewer pins than the FPGA, a smaller and less expensive package can be used for the RFPGA. Actual RFPGAs are produced using conventional standard cell techniques, which are well known in the art, in a manufacture RFPGAs step 595.
AREA EVALUATION
Area evaluation step 533 is generally a three-part process. First, each instance of the IOBs and CLBs is evaluated to determine the approximate area required by the corresponding RIOB and RLB. Second, each instance of the PSMs is evaluated to determine the area requirements of the corresponding RM. Third, an overall evaluation of the area required by the RFPGA is performed. For each instance of an IOB or CLB, the area required by the components (i.e., at the gate or transistor level) needed to implement the function of the IOB or CLB is determined. In some embodiments of the present invention, multiple alternative implementations of the functions of an IOB or CLB are available. For example, a particular CLB may have four different gate/transistor RLB designs that each implement the function of the CLB. The different implementations will have different height and width requirements. Picking the gate/transistor RLB design with the minimum area may not lead to the smallest overall area for the RFPGA.
Thus, some embodiments of the present invention choose the gate/transistor RLB design based on the height and width of other RLBs in the row or column to minimize the overall area required for the RFPGA. Next, the approximate area required for the RIOB or RLB is determined by multiplying the area required by the components by a guard band factor (a short sketch of this arithmetic appears below). Typically, the guard band factor is determined experimentally and is used to include an approximate value of the area required for internally routing the RLB or RIOB. Even though RFPGA 300 is implemented using standard cell technology, the RLBs and RIOBs are still somewhat tiled in RFPGA 300. Typically, RLBs and RIOBs are formed using rectangular shapes; however, tetragonal shapes (i.e., polygons formed with only right angles), as well as other polygonal shapes, can be used to reduce the area required by the RLBs and RIOBs. Once the shape is selected, the size of the shape must be determined. For example, if rectangular shapes are used, the width and length of the rectangular shape must be determined for each RLB and RIOB. Typically, it is desirable to minimize the overall size of the shape used for each RLB and RIOB. However, other criteria may be used. For example, some embodiments of the present invention determine the size of the shape to minimize the interconnect overhead. By allowing the size of the shape to be set by the required interconnect, the smallest size for that shape is possible. After the shape and size of an RLB or RIOB is determined, interconnect points are placed around the shape in a standardized manner so that the RLBs and RIOBs can interconnect in a known way. In one embodiment, the layout of the pins of the various models of the RFPGA is in the same physical order as the corresponding pins of the FPGA, but on a reduced-area basis. In some embodiments of the present invention, the sizes of the RLBs are not determined individually. Specifically, the size of each RLB is determined with the goal of minimizing the width of each RLB column and the height of each RLB row. Thus, the height of an RLB is set by the minimum height of the tallest RLB in the row. Similarly, the width of an RLB is set by the minimum width of the widest RLB in the column. During layout of the RLBs or RIOBs, alternative shapes may be chosen to optimize the area requirement of the row or column. Once the RLB and RIOB shapes and sizes have been determined, the area required by the RMs can be determined. Many factors may dictate the area required by the RMs. For example, factors may include the number of metal layers used by the RMs, the maximum length of an unbuffered interconnect line, the minimum distance between adjacent interconnects, and other technology specific rules. Furthermore, in some embodiments of the present invention, the area for the RMs may need to be expanded to include buffers for timing matching. Each RM is then evaluated to determine which wires and vias can be removed as well as which wires and vias can be moved. Specific examples of reducing the size of each RM are given below in detail. Once the area required by each RIOB, RLB, and RM is determined, the area required by an RFPGA can be determined. As illustrated in FIGS. 3, 4(a) and 4(b), the RIOBs are arranged in a rectangular ring shape, whereas the RLBs and the RMs are contained in a rectangular shape within the ring of RIOBs. In some embodiments, the area of the ring of RIOBs is increased to accommodate routing ring 440 (FIG. 4(a)).
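The guard-band estimate and the implementation-selection idea described above reduce to simple arithmetic, sketched below; the guard band value of 1.3 and all names are illustrative assumptions, since the text only says the factor is determined experimentally:

def estimate_block_area(component_areas, guard_band=1.3):
    """Approximate RLB/RIOB area as the summed gate/transistor component
    area multiplied by an experimentally determined guard band factor that
    accounts for internal routing. The 1.3 value is purely illustrative."""
    return sum(component_areas) * guard_band

def pick_implementation(candidates, row_height_budget):
    """Among alternative gate/transistor designs (width, height) implementing
    the same CLB function, prefer the narrowest one that fits the row height,
    since the minimum-area choice may not minimize overall RFPGA area."""
    fitting = [c for c in candidates if c[1] <= row_height_budget]
    return min(fitting or candidates, key=lambda wh: wh[0])

print(estimate_block_area([120.0, 80.5, 42.0]))                             # -> 315.25
print(pick_implementation([(6, 2), (4, 5), (3, 7)], row_height_budget=5))   # -> (4, 5)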
If the area required by the rectangular shape of the RLBs and RMs is greater than the area within the ring of RIOBs, the area for the ring of RIOBs is increased. In some embodiments, the RIOBs are treated as two horizontal rows and two vertical columns. Thus, the area within the RIOB ring can be increased by increasing the length of the horizontal rows or vertical columns to accommodate the area requirements of the RLBs and RMs. In other embodiments of the present invention, the area within the RIOB ring is not rectangular. For example, some embodiments use other polygonal shaped areas.
REDUCTION OF CLBs TO RLBs
As stated above, each CLB of the FPGA design is individually reduced into an RLB during formation of the RFPGA. Similarly, each component of a CLB is individually reduced to form the RLB. FIG. 6 shows a simplified block diagram of a conventional CLB 600. The present invention is applicable to a variety of CLBs; CLB 600 is merely one example of a CLB that can be used with the present invention. CLB 600 includes function generators 610, 620, and 640, selector circuits 630, 650, 680, and 690, and flip-flops 660 and 670. CLB 600 performs a variety of logic functions based on the configuration of function generators 610, 620, and 640 and selector circuits 630, 650, 680, and 690. CLB 600 receives input signals I[1:H], G[1:4], and F[1:4]. As used herein, a signal name referring to multiple signals is written as NAME[X:Y], and each individual signal is referred to as NAME[Z]. CLB 600 drives output signals Q1, O[1:J], and Q2. Function generator 610 can be configured to perform any four-input logic function using input signals G[1], G[2], G[3], and G[4]. Similarly, function generator 620 can be configured to perform any four-input logic function using input signals F[1], F[2], F[3], and F[4]. Although FIG. 6 shows CLB 600 with input signals coming from the left and output signals going to the right, actual CLB layouts in FPGAs may have input signals and output signals on any side of the CLB. Selector circuit 630 can be configured to select input signals for function generator 640 from input signals I[1:H] and the output signals of function generators 610 and 620. Function generator 640 can be configured to perform any two-input logic function using input signals from selector circuit 630. Typically, selector circuit 630 is formed using one or more multiplexers having selection input terminals coupled to configuration memory cells (not shown). Selector circuit 650, which is also typically formed using one or more multiplexers having selection input terminals coupled to configuration memory cells, can be configured to select various signals from input signals I[1:H] and the output signals of function generators 610, 620, and 640. Selector circuit 650 drives output signals O[1:J] as well as input signals to flip-flop 660, selector circuit 680, selector circuit 690, and flip-flop 670. Flip-flops 660 and 670 provide registered output signals to selector circuits 680 and 690, respectively. Depending on the specific implementation of CLB 600, flip-flops 660 and 670 may have a variety of configurable clocking options. Selector circuit 680 can be configured to select either the output signal of flip-flop 660 or a signal from selector circuit 650 to drive as output signal Q1. Similarly, selector circuit 690 can be configured to select either the output signal of flip-flop 670 or a signal from selector circuit 650 to drive as output signal Q2.
Selector circuits 680 and 690 are typically formed using a multiplexer having a selection input terminal coupled to a configuration memory cell. In the conversion of CLB 600 into an RLB, the configuration data for CLB 600 is analyzed to determine how the selector circuits are configured. Once configured, the selector circuits are essentially wired paths. Thus, selector circuits can be replaced by metal and/or semiconductor buffers plus wire paths and vias in RLBs. Flip-flops 660 and/or 670 are eliminated from an RLB if the FPGA design file does not use them; otherwise, the used flip-flops are included in the RLB. The configuration circuitry of the CLB is eliminated. However, in some embodiments of the present invention, RFPGAs are partially configurable; thus, some RLBs may still contain configuration circuits. In general, the area of an RLB can be reduced relative to that of a standard CLB by eliminating selector circuits, configuration circuits, and unused circuits. In addition, the area required by the function generators of CLB 600 can be substantially reduced, as described below with respect to FIGS. 7 and 8. FIG. 7 shows a conventional embodiment of function generator 610. The present invention is applicable to a variety of function generators. Function generator 610 comprises a decoder 710, a memory array 720, and a multiplexer 730. Memory array 720 is a 16-bit memory, which is addressed by input signals G[1:4]. Decoder 710 decodes input signals G[1:4] to enable one of the 16 memory bits of memory array 720 to write a value. Multiplexer 730, which is controlled by input signals G[1:4], selects one of the memory bits of memory array 720 to drive an output signal OUT. In some FPGA designs, function generator 610 (FIG. 6) is used only as a four-input logic function. For these FPGA design files, configuration data is stored in memory array 720 to provide the proper output for the 16 possible values of input signals G[1:4]. In other FPGA designs, function generator 610 is configured to act as a random access memory unit. In these FPGA designs, the input terminal of each memory cell is configurably coupled to other user circuits, i.e., CLBs, IOBs, and PSMs. In one embodiment, if the CLB model uses function generator 610 as a logic function, the RLB model replaces decoder 710, memory array 720, and multiplexer 730 with a single multiplexer. As illustrated in FIG. 8(a), if function generator 610 implements a four-input logic function, a 16-input multiplexer 810 replaces function generator 610. The input terminals of multiplexer 810 are coupled to logic high or logic low depending on the function implemented by function generator 610. As illustrated in FIG. 8(b), if function generator 610 implements a three-input logic function, an eight-input multiplexer 820 replaces function generator 610. Similarly, if function generator 610 implements a two-input logic function, a four-input multiplexer 830 (FIG. 8(c)) replaces function generator 610, and if function generator 610 implements a one-input logic function, a two-input multiplexer 840 (FIG. 8(d)) replaces function generator 610. In some embodiments, rather than using multiplexers, the logic function is directly implemented using logic gates, which require less area (a sketch of detecting how many inputs a configured function actually depends on follows below). For CLB models using function generator 610 as a memory array, the RLB model must also contain decoder 710, memory array 720, and multiplexer 730.
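The FIG. 8 reduction depends on knowing how many inputs a configured function generator actually uses. One way to detect this from the 16-bit contents of memory array 720 is sketched below; the truth-table encoding and function name are assumptions made for illustration:

def used_inputs(truth_table):
    """truth_table: 16 booleans indexed by the 4-bit input value, where bit i
    of the index is G[i+1]. Returns the set of input positions the function
    actually depends on, mirroring the FIG. 8 reduction from a 16-input
    multiplexer down to 8-, 4-, or 2-input forms (or plain logic gates)."""
    assert len(truth_table) == 16
    depends = set()
    for bit in range(4):
        for idx in range(16):
            if truth_table[idx] != truth_table[idx ^ (1 << bit)]:
                depends.add(bit)   # flipping G[bit+1] changes the output
                break
    return depends

# XOR of G[1] and G[2] ignores G[3] and G[4], so a four-input multiplexer
# (FIG. 8(c)) or a single XOR gate suffices in the RLB.
table = [bool((i & 1) ^ ((i >> 1) & 1)) for i in range(16)]
print(sorted(used_inputs(table)))   # -> [0, 1], i.e., a 2**2 = 4-input mux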
However, some configuration circuits within function generator 610 can be removed, which reduces the size of function generator 610. Furthermore, if function generator 610 is used as memory, it may be replaced with a compiled RAM cell using well known techniques. Rather than generating each RLB model during conversion of an FPGA to an RFPGA, some embodiments of the present invention use a database to retrieve an RLB model which corresponds to a CLB model. For example, the manufacturer of an FPGA may create a database including RLB models for every possible CLB configuration in the FPGA. Using such a database would reduce the time necessary to create an RFPGA; however, creation of the database would be very time consuming. Thus, some embodiments of the present invention use a combined approach. Specifically, a partial database is created. If the CLB model is already in the database, then the corresponding RLB model is retrieved. However, if the CLB model is not in the database, then an RLB model is created and stored in the database. In addition, many databases include multiple RLB models for each CLB model. The various RLB models would have different shape parameters. Having multiple differently shaped RLB models for each CLB model allows better optimization of the RFPGA. Thus, in some embodiments, in addition to determining whether the CLB model is in the database, the shape parameter of a corresponding RLB model is evaluated. If the shape parameter is not sufficient, then a second corresponding RLB model is created and stored in the database. The various database approaches can also be applied to individual elements within the CLB. Furthermore, these databases can also be extended to include RM models and RIOB models (a sketch of this lookup appears below). FIG. 9 illustrates that the height of each RLB row is equal to the height of the tallest RLB. Specifically, the height of RLB row 930 is equal to the height of RLB 933 (i.e., the tallest RLB in RLB row 930). Thus, in some embodiments of the present invention, after the RLB models are formed, RLB 933 may be rearranged to make it shorter. To shorten RLB 933, it is typically widened to provide additional semiconductor area. By shortening RLB 933, the height of RLB row 930 is decreased and the semiconductor area required by the non-uniform array of RLBs is reduced. However, RLB 933 may not be widened beyond the width of RLB 943 (i.e., the widest RLB of RLB column 940) without expanding the width of RLB column 940. The same principle of rearranging RLBs to save semiconductor area within the non-uniform array of RLBs can be applied to thinning the widest RLB of a column by increasing its height. In one embodiment of the present invention, a multi-pass area evaluation mechanism is employed to optimize the height and width by adjusting and readjusting each RLB. In addition, if an FPGA design does not use any CLBs in a row of CLBs, the RFPGA need not include the unused row. Similarly, if an FPGA design does not use any CLBs in a column of CLBs, the RFPGA need not include the unused column. Thus, to optimize conversion to RFPGAs, FPGA design tools can be modified to attempt to pack an FPGA design into a corner of the CLB matrix of the FPGA to maximize the number of unused rows and columns of CLBs, which can be omitted in the RFPGA. Other embodiments may attempt to maximize the number of unused rows and columns of CLBs by moving CLBs.
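The combined database approach described above (retrieve an RLB model when one exists, otherwise build it and store it, with an optional shape-parameter check) can be sketched as follows; the class, the build callback, and the shape test are hypothetical stand-ins for the patent's model-generation steps:

class CLBDatabase:
    """Partial database of RLB models keyed by CLB configuration. A lookup
    returns a stored model whose shape parameter is acceptable; otherwise a
    new model is built and added, growing the database over time. The same
    pattern extends to IOB, RM, and RIOB databases."""
    def __init__(self, build_rlb_model):
        self.models = {}              # CLB configuration -> list of RLB models
        self.build = build_rlb_model
    def lookup(self, clb_config, shape_ok=lambda model: True):
        for model in self.models.get(clb_config, []):
            if shape_ok(model):       # evaluate the associated shape parameter
                return model
        model = self.build(clb_config)                        # build a new RLB model
        self.models.setdefault(clb_config, []).append(model)  # and add it
        return model

# Usage: the first lookup builds and caches; the second is a database hit.
db = CLBDatabase(build_rlb_model=lambda cfg: {"config": cfg, "shape": (4, 3)})
m1 = db.lookup("lut4:0xCAFE")
m2 = db.lookup("lut4:0xCAFE")
print(m1 is m2)                       # -> True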
As noted above, the unused rows and columns can be removed in the RFPGA.
REDUCTION OF THE PROGRAMMABLE INTERCONNECT
As stated above, programmable interconnect 130 of FPGA 110 (FIG. 1) is replaced with reduced interconnect 330 (FIG. 3). However, some embodiments of the present invention may choose to reroute part or all of programmable interconnect 130 rather than converting programmable interconnect 130 into reduced interconnect 330. The disadvantages of rerouting include possible changes to the timing relationship of signals between programmable interconnect 130 and reduced interconnect 330. To convert programmable interconnect 130 to reduced interconnect 330, the PSMs of programmable interconnect 130 are replaced by RMs (routing matrices). With the replacement of PSMs with RMs, the wires forming reduced interconnect 330 are minimized due to the placement of the RMs. FIG. 10 shows a conventional PSM 1000. PSM 1000 comprises eight programmable interconnect points (PIPs) 1010, 1020, 1030, 1040, 1050, 1060, 1070, and 1080. Each PIP 10X0 is coupled to a left wire L_X, a right wire R_X, a top wire T_X, and a bottom wire B_X, where X is an integer from 1 to 8, inclusive. Each PIP 10X0 contains six pass transistors, each coupled between two of the wires coming into PIP 10X0. The gates of the pass transistors are coupled to configuration memory cells (not shown). Thus, PIP 10X0 can configurably couple left wire L_X, right wire R_X, top wire T_X, and bottom wire B_X together in any combination. Typically, in a standard cell integrated circuit such as an RFPGA, vertical wires are on one layer of the integrated circuit and horizontal wires are on a second layer. Thus, in the conversion of PSMs to RMs, PIPs are replaced with vias if a horizontal wire (i.e., a left or right wire) is coupled to a vertical wire (i.e., a top or bottom wire). The configuration memories, pass transistors, and unused wires of the PSM are removed. Thus, replacing the PIPs of programmable interconnect 130 with the vias of reduced interconnect 330 greatly reduces the area required by reduced interconnect 330 as compared to programmable interconnect 130. FIGS. 11(a)-11(d) illustrate methods to convert a PSM into an RM. In FIG. 11(a), PSM 1000 is configured so that PIP 1020 couples top wire T_2 to right wire R_2. PIP 1030 is configured to couple top wire T_3 to bottom wire B_3 and left wire L_3. PIP 1040 is configured to couple bottom wire B_4 to right wire R_4. PIP 1060 is configured to couple top wire T_6 to right wire R_6. PIP 1070 is configured to couple top wire T_7 to bottom wire B_7. PIPs 1010, 1050, and 1080 are not used. As shown in FIG. 11(b), in an RFPGA, the unused PIPs and wires are removed, and the used PIPs are replaced with vias or simple wiring. Specifically, PIPs 1010, 1050, and 1080 are removed. Similarly, top wires T_1, T_4, T_5, and T_8, bottom wires B_1, B_2, B_5, B_6, and B_8, left wires L_1, L_2, L_4, L_5, L_6, L_7, and L_8, and right wires R_1, R_3, R_5, R_7, and R_8 are removed. Vias 1120, 1130, 1140, and 1160 replace PIPs 1020, 1030, 1040, and 1060, respectively. PIP 1070 is also removed, and top wire T_7 and bottom wire B_7 are treated as a single wire 1171 in RM 1190. For convenience and clarity, a coordinate system is provided in FIGS. 11(b)-11(d). Specifically, an X coordinate increases from left to right at a rate of 1 for each possible wire channel, a Y coordinate increases from bottom to top at a rate of 1 for each possible wire channel, and via 1120 is defined as the origin.
Coordinates are given in the format (X,Y); thus, via 1120 has a coordinate of (0,0). After the removal and replacement of PSM components with RM components, the area of RM 1190 can be reduced as compared to the area of PSM 1000. As illustrated in FIG. 11(c), excess area in RM 1190 is removed by moving the vias and wires as close together as possible. Specifically, via 1160 is moved to coordinate (3,3). Consequently, top wire T_6 is moved to have an X coordinate value equal to 3, and right wire R_6 is moved to have a Y coordinate value equal to 3. After moving top wire T_6, wire 1171 can be moved to have an X coordinate value equal to four. Thus, after reduction, RM 1190 has a height equal to four wire channels and a width equal to five wire channels (in contrast to a height and width of eight wire channels each without reduction, as shown in FIG. 11(b)). In some embodiments of the present invention, an optimization is performed to further reduce the area of RM 1190. As illustrated in FIG. 11(d), via 1140 can be moved to coordinate (2,1), which moves right wire R_4 to have a Y coordinate value equal to one, without causing any short circuits. Similarly, via 1160 can be moved to coordinate (2,2) and top wire T_6 can be moved to have an X coordinate value equal to two without causing any short circuits. In addition, after moving top wire T_6 and via 1160, wire 1171 can be moved to have an X coordinate value equal to three. Thus, by optimizing the positioning of vias 1140 and 1160, the height of RM 1190 is reduced to three wire channels and the width of RM 1190 is reduced to four wire channels. A method to reduce the area of an RM in accordance with one embodiment of the present invention begins by defining a corner via, e.g., the leftmost and bottommost via, as the origin. The next nearest via (or wire, if a wire has no via) is then moved to an integer coordinate which minimizes the distance between the via and the origin without causing short circuits. Wires coupled to the via are moved with the via. If two possible coordinates are equally close to the origin, then either coordinate can be chosen. However, if minimizing height or width has priority, then the coordinate which minimizes the priority dimension should be chosen. When moving a wire such as wire 1171, the distance between the wire and the origin is defined as the length of the line which is perpendicular to the wire and connects to the origin. This process is repeated for each via or wire in the RM (the packing loop is sketched below, after which the worked example concludes). Using FIGS. 11(a)-(d) as an example, via 1120 is initially defined as the origin. The goal is to move via 1130 closer to the origin without causing a short circuit. Because via 1130 cannot be moved to either coordinate (0,1) or coordinate (1,0), via 1130 remains at coordinate (1,1). The distance of via 1140 to the origin is then minimized. As illustrated in FIG. 11(d), via 1140 can be moved to coordinate (2,1), which is closer to the origin than coordinate (2,2) (the initial coordinate of via 1140), without causing any short circuits. Thus, via 1140 is moved to coordinate (2,1). Then the distance of via 1160 is minimized, as shown in FIG. 11(d), by placing via 1160 at coordinate (2,2), which minimizes the distance of via 1160 to the origin without causing short circuits.
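The packing loop just described can be sketched as follows. The collides callback stands in for the short-circuit test, the coordinate handling is simplified (wires are assumed to move with their vias), and all names are illustrative; this is a sketch of the idea, not the patented procedure:

def pack_vias(vias, collides):
    """With the corner via fixed as the origin, visit the remaining vias in
    order of distance and slide each one to the nearest free integer
    coordinate (no farther from the origin than its current position) that
    the caller-supplied short-circuit test does not reject."""
    placed = [(0, 0)]                                   # the origin via
    rest = sorted(vias, key=lambda p: p[0] ** 2 + p[1] ** 2)
    for (x0, y0) in rest:
        candidates = [(x, y) for x in range(x0 + 1) for y in range(y0 + 1)]
        candidates.sort(key=lambda p: p[0] ** 2 + p[1] ** 2)
        for cand in candidates:
            if cand not in placed and not collides(cand, placed):
                placed.append(cand)
                break
    return placed

# With no short-circuit constraints, vias pack tightly around the origin:
print(pack_vias([(1, 1), (2, 2), (3, 3)], collides=lambda c, placed: False))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]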
Finally, wire 1171 is moved to have an X coordinate of three, which minimizes the distance between the origin and wire 1171. Because reduced interconnect 330 connects a large number of RLBs both vertically and horizontally, moving vias and wires in one RM may create complications for adjacent RMs. FIG. 11(e) illustrates the connections between RM 1190 and an RM 1191. Specifically, RM 1191 has a via 1192 coupled to via 1160, a via 1194 not coupled to any vias of RM 1190, a via 1196 coupled to via 1140, a via 1197 not coupled to any via of RM 1190, and a via 1199 coupled to via 1120. As illustrated in FIG. 11(f), after optimization of RM 1190 and RM 1191, the various vias coupled between RM 1190 and RM 1191 may not be in the same wire channel. For example, via 1160 is two wire channels below via 1192. Diagonal routing, as illustrated in FIG. 11(f), can be used to couple the vias together. Furthermore, diagonal routing may also be used within each RM. In other embodiments, additional optimizations such as swapping the Y coordinates of via 1196 and via 1197 may be used to reduce the need for diagonal routing. However, swapping coordinates of vias may cause additional complications. Thus, an iterative method involving multiple passes of moving and swapping vias and wires can be used to optimize the placement of vias and wires in creating the reduced interconnect. Furthermore, in some embodiments of the present invention, additional metal layers are available for use in the construction of reduced interconnect 330. These additional metal layers may include the use of diagonal wires to further optimize the area required for the reduced interconnect.
REDUCTION OF IOBs TO RIOBs
Like RLBs, RIOBs are produced by removing unused components and reducing each used component of an IOB by removing configuration circuits. FIG. 12(a) shows a simplified block diagram of a conventional IOB 1200. The present invention is applicable to a variety of IOBs; IOB 1200 is merely one example of an IOB which can be used with the present invention. IOB 1200 includes selector circuits 1210, 1230, and 1280, an output flip-flop 1220, an input flip-flop 1270, an output buffer 1240, an input buffer 1260, and a bonding pad 1250. IOB 1200 can be configured to receive data from and/or to drive data to bonding pad 1250. Furthermore, IOB 1200 can be configured to register both outgoing and incoming data using output flip-flop 1220 and input flip-flop 1270, respectively. Some embodiments of IOB 1200 may also include enable/disable circuits for output buffer 1240. Specifically, various output signals O[1:M] are received by selector circuit 1210. Selector circuit 1210 is configured to drive an output signal to output flip-flop 1220 and selector circuit 1230. Output flip-flop 1220 is configured to register the signal from selector circuit 1210. Selector circuit 1230 is configured to drive either an output signal from output flip-flop 1220 or the output signal from selector circuit 1210 to output buffer 1240. If IOB 1200 is an output block or an input/output block, then output buffer 1240 is configured to drive the signal from selector circuit 1230 to bonding pad 1250 using an appropriate external voltage and current. If IOB 1200 is an input block or an input/output block, then data signals from outside the FPGA are received on bonding pad 1250. Input buffer 1260 converts the signals on bonding pad 1250 to an appropriate internal voltage and current and provides an input signal to input flip-flop 1270 and selector circuit 1280.
REDUCTION OF IOBs TO RIOBs

Like RLBs, RIOBs are produced by removing unused components and reducing each used component of an IOB by removing configuration circuits. FIG. 12(a) shows a simplified block diagram of a conventional IOB 1200. The present invention is applicable to a variety of IOBs; IOB 1200 is merely one example of an IOB which can be used with the present invention. IOB 1200 includes selector circuits 1210, 1230, and 1280, an output flip-flop 1220, an input flip-flop 1270, an output buffer 1240, an input buffer 1260, and a bonding pad 1250. IOB 1200 can be configured to receive data from and/or to drive data to bonding pad 1250. Furthermore, IOB 1200 can be configured to register both outgoing and incoming data using output flip-flop 1220 and input flip-flop 1270, respectively. Some embodiments of IOB 1200 may also include enable/disable circuits for output buffer 1240.

Specifically, various output signals O[1:M] are received by selector circuit 1210. Selector circuit 1210 is configured to drive an output signal to output flip-flop 1220 and selector circuit 1230. Output flip-flop 1220 is configured to register the signal from selector circuit 1210. Selector circuit 1230 is configured to drive either an output signal from output flip-flop 1220 or the output signal from selector circuit 1210 to output buffer 1240. If IOB 1200 is an output block or an input/output block, then output buffer 1240 is configured to drive the signal from selector circuit 1230 to bonding pad 1250 using an appropriate external voltage and current.

If IOB 1200 is an input block or an input/output block, then data signals from outside the FPGA are received on bonding pad 1250. Input buffer 1260 converts the signals on bonding pad 1250 to an appropriate internal voltage and current and provides an input signal to input flip-flop 1270 and selector circuit 1280. Input flip-flop 1270 is configured to register the input signal from input buffer 1260. Selector circuit 1280 is configured to drive input signals I[1:N] with either the input signal from input buffer 1260 or the output signal of input flip-flop 1270.

In converting IOB models to RIOB models, unused components are removed; thus, the area required by an RIOB is reduced as compared to the area of an IOB. For example, if IOB 1200 is used exclusively as an input block, the corresponding RIOB model would not include selector circuit 1210, output flip-flop 1220, selector circuit 1230, or output buffer 1240. Conversely, if IOB 1200 is used exclusively as an output block, the corresponding RIOB model would not include input buffer 1260, input flip-flop 1270, or selector circuit 1280. Furthermore, if IOB 1200 is not configured to use registered input signals or registered output signals, input flip-flop 1270 or output flip-flop 1220 is removed, respectively. Typically, selector circuits 1210, 1230, and 1280 are formed using one or more multiplexers having selection input terminals coupled to configuration memory cells. Thus, in the RIOB model, selector circuits 1210, 1230, and 1280 are replaced with the appropriate wiring nets dictated by the configuration of the selector circuits. FIG. 12(b) shows a model for a registered bidirectional RIOB 1202 having flip-flops 1220 and 1270, buffers 1240 and 1260, and bonding pad 1250. Note that selector circuits 1210, 1230, and 1280 and any associated wiring paths (FIG. 12(a)) are removed from RIOB 1202. A sketch of this pruning appears at the end of this section.

For some RFPGAs, the area required by the RLBs, RMs, and reduced interconnect 330 (FIG. 4(a)) may be much less than the area inside routing ring 440 because the size of routing ring 440 is also dictated by the RIOBs. In these cases, the benefits of forming RLBs, RMs, and reduced interconnect 330 may be lost, since much of the area of the RFPGA is wasted. One method to reduce the size of routing ring 440 is to eliminate RIOBs that correspond to unused IOBs. Thus, for example, if an FPGA has 240 IOBs but the FPGA design only uses 160 IOBs, then the RFPGA need only include 160 RIOBs around routing ring 440. As shown in FIG. 13, deletion of RIOBs may cause misalignment of RIOBs and the corresponding RLBs. Specifically, in FIG. 13, RIOB 1345 corresponds to RLB 1340; however, due to the reduction process, RIOB 1345 is not aligned with RLB 1340. In accordance with the present invention, because RLB 1340 and RIOB 1345 are each coupled to routing ring 440 rather than directly to each other, the misalignment of RIOB 1345 and RLB 1340 does not impact the reduction process of RIOB 1345 and RLB 1340. The misalignment is handled within routing ring 440. As explained above, routing ring 440 comprises wires which are defined to connect a point on the inner routing grid and a point on the outer routing grid. If misalignments are severe, then the thickness on any given side of routing ring 440 may be increased to accommodate more wiring channels.
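In code form, the pruning rule for a single IOB might look like the following. This is a minimal sketch under assumed names: components are keyed by the reference numerals of FIG. 12(a), and the four booleans are assumed to be derivable from the FPGA design's configuration.

```python
def build_riob(used_as_input, used_as_output, reg_in, reg_out):
    """Return the set of FIG. 12(a) components kept in the RIOB model.

    An empty set means the IOB is unused, so no RIOB is generated at all
    (which in turn lets the routing ring shrink).
    """
    if not (used_as_input or used_as_output):
        return set()
    keep = {"pad_1250"}
    if used_as_output:
        keep.add("out_buf_1240")
        if reg_out:
            keep.add("out_ff_1220")
    if used_as_input:
        keep.add("in_buf_1260")
        if reg_in:
            keep.add("in_ff_1270")
    # Selector circuits 1210, 1230, and 1280 are dropped in every case:
    # in the RIOB model they are replaced by the fixed wiring nets
    # dictated by the configuration of the selector circuits.
    return keep
```

For a registered bidirectional pin, `build_riob(True, True, True, True)` keeps exactly the five components of RIOB 1202 in FIG. 12(b).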
RFPGA TIMING ISSUES

The internal timing of an FPGA may differ from the internal timing of a corresponding RFPGA. Specifically, although the speed of the logic circuits generally does not change significantly, the propagation delays in an RFPGA are less than the propagation delays of the FPGA. Thus, in general, an RFPGA performs faster than a corresponding FPGA. In most cases, faster performance is desirable; thus, timing issues may not need to be addressed unless specific additional timing constraints must be met.

FIG. 14 shows a flow diagram illustrating a method to convert an FPGA into an RFPGA and to adjust timing in the RFPGA in accordance with one embodiment of the present invention. Because FIG. 14 is similar to FIG. 5, the same reference numerals are used in FIG. 14 for similar steps; for brevity, the description of these similar steps is not repeated. In the method of FIG. 14, an FPGA timing extraction step 1419 is performed to ascertain the internal timing information for FPGA design 505.

Adjustment of the internal timing of the RFPGA is performed in an iterative manner, as sketched in the loop below. Initially, the RFPGA model is formed as described above with respect to FIG. 5, up to and including route RIOBs to ring step 585. However, some initial timing buffers may be included in place and route timing buffers step 1459 due to additional timing constraints 1450 provided by the user. The internal timing of the RFPGA is extracted in RFPGA timing extraction step 1487.

The internal FPGA timing is compared to the internal RFPGA timing in timing comparison step 1488. Some embodiments allow additional timing constraints 1450 to be placed on the RFPGA. If the internal timing of the RFPGA matches the internal timing of FPGA design 505 and satisfies additional timing constraints 1450, then processing continues as described above in add die demarcation step 587. Otherwise, adjust timing step 1489 attempts to remedy the problem detected by timing comparison step 1488 by adjusting the timing buffers and placement of components.

Thus, place and route timing buffers step 1459 must be performed again using the new data from adjust timing step 1489. In some embodiments, additional optimization allowing different RLB and RIOB models to increase or decrease time delays may also be used. Similarly, adjust placement step 1465 is performed to accommodate the changes. In some embodiments, the size and shape of the models may be modified to achieve the desired timing requirements; thus, the RIOB or RLB models may be rebuilt in build RIOB model step 542 or build RLB models step 545. Then, the RFPGA model is recreated and timing is again compared as explained above.

When additional timing constraints 1450 are not included, the internal timing of the RFPGA can be made to closely match the internal timing of FPGA design 505; thus, the RFPGA can be a direct replacement for the FPGA in the user's system. The inclusion of additional timing constraints 1450 gives the user the flexibility to adjust all, some, or none of the data paths of an RFPGA to fine tune the timing of the RFPGA and thereby maximize the performance of the RFPGA as well as the user's system that includes the RFPGA.
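The iterative loop of FIG. 14 can be paraphrased as follows. Every helper name here is hypothetical shorthand for the numbered steps of the figure, not an actual tool API; the sketch only shows the control flow.

```python
def convert_with_timing(fpga_design, constraints_1450, max_iters=10):
    """Control-flow sketch of the FIG. 14 timing-closure loop."""
    target = extract_fpga_timing(fpga_design)                 # step 1419
    rfpga = build_rfpga_model(fpga_design)                    # steps through 585
    place_and_route_timing_buffers(rfpga, constraints_1450)   # step 1459
    for _ in range(max_iters):
        timing = extract_rfpga_timing(rfpga)                  # step 1487
        if matches(timing, target) and satisfies(timing, constraints_1450):
            return rfpga                                      # on to step 587
        adjust_timing(rfpga, timing, target)                  # step 1489
        place_and_route_timing_buffers(rfpga, constraints_1450)  # step 1459
        adjust_placement(rfpga)                               # step 1465
        # If needed, the RLB/RIOB models themselves may be rebuilt here
        # (steps 542 and 545) and the RFPGA model recreated.
    raise RuntimeError("internal timing did not converge")
```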
GENERAL METHODOLOGY FOR DEVELOPING RFPGA TOOLS

In general, the methodology of creating RFPGA tools can be approached in stages to permit faster implementation of the tools. The first stage is to reuse the existing FPGA implementation for CLBs and IOBs; however, the unneeded programming elements would be removed and the routing structure would be replaced using standard cell techniques.

The second stage replaces specific FPGA elements with smaller elements. For example, IOBs are replaced with RIOBs, and CLBs are replaced with RLBs. As explained above, some or all of the RIOBs and RLBs may be in preexisting databases rather than generated each time an FPGA is converted to an RFPGA. If CLBs or IOBs are encountered that do not have corresponding RLBs or RIOBs in the database, then a new RLB or RIOB model is created and can be added to the database.

In the third stage, various other improvements, such as iterative area evaluation, can be added to further reduce area. For example, as described above, the shape and size of an RLB may be repeatedly recalculated based on the sizes and shapes of the other RLBs. Furthermore, the shape and size of an RLB may also be recalculated based on the interconnect, the RIOBs, and the routing ring. Thus, the area of any component of the RFPGA can be iteratively calculated based on the shapes and sizes of the other components in the RFPGA. Accordingly, multiple levels of iterative area evaluation can be used to minimize the area required by the RFPGA.

Thus, different RFPGA tools for creating RFPGAs can be created in a staged manner. Simpler tools that use some of the principles of the present invention can be implemented first, to allow users quick access to the benefits of RFPGAs. Then, additional tools can be implemented that use more advanced principles of the present invention to reduce the size and cost of RFPGAs. Iterative tools can then be added to further minimize the size and cost of RFPGAs.

In the various embodiments of this invention, methods and structures have been described to convert an FPGA design into an RFPGA. Specifically, a model for each component of the FPGA, such as the CLBs, IOBs, and PSMs, is extracted from the FPGA design file. The CLBs, IOBs, and PSMs are individually reduced to form RLBs, RIOBs, and RMs, respectively. The area required by each reduced component of the RFPGA is less than the area of the equivalent component of the FPGA; thus, the semiconductor area required for an RFPGA is less than the semiconductor area of the FPGA. Furthermore, the RFPGA is manufactured using standard cell libraries and does not require configuration. Therefore, the cost to use an RFPGA is greatly reduced as compared to an FPGA.

The various embodiments of the structures and methods of this invention that are described above are illustrative only of the principles of this invention and are not intended to limit the scope of the invention to the particular embodiments described. For example, in view of this disclosure, those skilled in the art can define other ICs, standard cells, logic blocks, FPGAs, CLBs, IOBs, PSMs, RLBs, RIOBs, RMs, routing rings, and so forth, and use these alternative features to create a method, circuit, or system according to the principles of this invention. Thus, the invention is limited only by the following claims. |
A computer-implemented method of converting a circuit design for a programmable logic device (PLD) to a standard cell circuit design can include unmapping a PLD circuit design to a gate level netlist (110), mapping logic gates of the netlist to functionally equivalent standard cells (120), and including the standard cells within the standard cell circuit design (125). Design constraints for the standard cell circuit design can be automatically generated (135, 140). The design constraints for the standard cell circuit design can be output (145). |
CLAIMS What is claimed is: 1. A method of converting a programmable logic device (PLD) circuit design to a standard cell circuit design, the method comprising: unmapping a PLD circuit design to a gate level netlist; mapping logic gates of the netlist to functionally equivalent standard cells; including the standard cells within the standard cell circuit design; automatically generating design constraints for the standard cell circuit design according to the PLD circuit design; and outputting the design constraints for the standard cell circuit design. 2. The method of claim 1, further comprising: placing and routing the standard cell circuit design according to the design constraints for the standard cell circuit design; and outputting the standard cell circuit design. 3. The method of claim 1, wherein the PLD circuit design is specified as a bitstream. 4. The method of claim 1, wherein unmapping comprises decomposing lookup tables into logic gates. 5. The method of claim 1, further comprising inserting, into the standard cell circuit design, each hard intellectual property block used by the PLD circuit design. 6. The method of claim 1, wherein generating design constraints for the standard cell circuit design comprises: extracting signal path delays from the PLD circuit design; and setting each signal path delay as a maximum signal path delay for a corresponding signal path of the standard cell circuit design. 7. The method of claim 1, wherein generating design constraints for the standard cell circuit design comprises: identifying explicit timing constraints of the PLD circuit design; and translating the explicit timing constraints from a format used by the PLD circuit design to a format used by the standard cell circuit design. 8. The method of claim 1, wherein generating design constraints for the standard cell circuit design comprises, for a selected clock domain of the standard cell circuit design, setting a hold time constraint according to a signal path having a shortest delay between a timing start point and a timing end point within a clock domain of the PLD circuit design that corresponds to the selected clock domain. 9. The method of claim 1, wherein the PLD circuit design comprises a first clock domain and a second clock domain and the standard cell circuit design comprises a first clock domain corresponding to the first clock domain of the PLD circuit design and a second clock domain corresponding to the second clock domain of the PLD circuit design, wherein generating design constraints further comprises setting a hold time constraint for a signal beginning at a timing start point within the first clock domain and ending at a timing end point within the second clock domain of the standard cell circuit design according to a signal path having a shortest delay of signals having a timing start point in the first clock domain and having a timing end point in the second clock domain of the PLD circuit design. 10. The method of claim 1, wherein generating design constraints for the standard cell circuit design comprises: identifying pins of a target PLD that are used by the PLD circuit design; constraining the standard cell circuit design to include only pins corresponding to pins of the target PLD that are used by the PLD circuit design, wherein unused pins of the target PLD are not implemented within the standard cell circuit design; and constraining pin placement of the standard cell circuit design to match pin placement of the PLD circuit design. 11. 
The method of claim 1, wherein the PLD design is implemented in a simulated circuit. 12. A method of converting a circuit design for a programmable logic device (PLD) to a standard cell circuit design, the method comprising: determining a gate level netlist from a binary representation of a circuit design for a PLD; mapping logic gates of the netlist to standard cells; including the standard cells within the standard cell circuit design; inserting each hard intellectual property block used by the PLD circuit design into the standard cell circuit design; generating timing constraints for the standard cell circuit design according to timing information extracted from the PLD circuit design; placing and routing the standard cell circuit design according to the timing constraints for the standard cell circuit design; and outputting the standard cell circuit design. 13. The method of claim 12, wherein generating timing constraints for the standard cell circuit design comprises: using input/output signal path delays of the PLD circuit design as maximum signal path delays for corresponding signal paths of the standard cell circuit design; and translating non-input/output signal path delays of the circuit design for the PLD from a first format to a second format used by the standard cell circuit design. 14. The method of claim 12, wherein generating timing constraints for the standard cell circuit design comprises, for a clock domain of the standard cell circuit design, setting a hold time constraint according to a signal path having a shortest delay between a timing start point and a timing end point within a corresponding clock domain of the PLD circuit design. 15. The method of claim 12, wherein the PLD circuit design comprises a first clock domain and a second clock domain and the standard cell circuit design comprises a first clock domain corresponding to the first clock domain of the PLD circuit design and a second clock domain corresponding to the second clock domain of the PLD circuit design, wherein generating design constraints further comprises setting a hold time constraint for a signal beginning at a timing start point within the first clock domain and ending at a timing end point within the second clock domain of the standard cell circuit design according to a signal path having a shortest delay of signals having a timing start point in the first clock domain and having a timing end point in the second clock domain of the PLD circuit design. 16. The method of claim 12, further comprising: identifying pins of a target PLD that are used by the PLD circuit design; constraining the standard cell circuit design to include only pins corresponding to pins of the target PLD that are used by the PLD circuit design, wherein unused pins of the target PLD are not implemented within the standard cell circuit design; and constraining pin placement of the standard cell circuit design to match pin placement of the PLD circuit design. 17. 
A computer program product comprising: a computer-usable medium comprising computer-usable program code that converts a circuit design for a programmable logic device (PLD) to a standard cell circuit design, the computer-usable medium comprising: computer-usable program code that unmaps a PLD circuit design to a gate level netlist; computer-usable program code that maps logic gates of the netlist to functionally equivalent standard cells; computer-usable program code that includes the standard cells within the standard cell circuit design; computer-usable program code that automatically generates design constraints for the standard cell circuit design according to the PLD circuit design; and computer-usable program code that outputs the design constraints for the standard cell circuit design. 18. The computer program product of claim 17, further comprising: computer-usable program code that places and routes the standard cell circuit design according to the design constraints for the standard cell circuit design; and computer-usable program code that outputs the standard cell circuit design. 19. The computer program product of claim 18, further comprising: computer-usable program code that inserts, into the standard cell circuit design, each hard intellectual property block used by the PLD circuit design; computer-usable program code that uses signal path delays of the circuit design for the PLD as maximum signal path delays for corresponding signal paths of the standard cell circuit design; and computer-usable program code that translates signal path delays of the circuit design for the PLD from a first format to a second format used by the standard cell circuit design. 20. The computer program product of claim 18, wherein the computer-usable program code that generates design constraints for the standard cell circuit design comprises computer-usable program code that, for a selected clock domain of the standard cell circuit design, sets a hold time constraint according to a signal path having a shortest delay between a timing start point and a timing end point within a clock domain of the PLD circuit design that corresponds to the selected clock domain. 21. The computer program product of claim 18, wherein the PLD circuit design comprises a first clock domain and a second clock domain and the standard cell circuit design comprises a first clock domain corresponding to the first clock domain of the PLD circuit design and a second clock domain corresponding to the second clock domain of the PLD circuit design, wherein the computer-usable program code that generates design constraints further comprises computer-usable program code that sets a hold time constraint for a signal beginning at a timing start point within the first clock domain and ending at a timing end point within the second clock domain of the standard cell circuit design according to a signal path having a shortest delay of signals having a timing start point in the first clock domain and having a timing end point in the second clock domain of the PLD circuit design. 22. The computer program product of claim 18, wherein the PLD circuit design is implemented in a simulated circuit. |
CREATING A STANDARD CELL CIRCUIT DESIGN FROM A PROGRAMMABLE LOGIC CIRCUIT DESIGN

FIELD OF THE INVENTION

The embodiments disclosed herein relate to integrated circuit devices (ICs). More particularly, the embodiments relate to creating a standard cell circuit design from a programmable logic circuit design.

BACKGROUND OF THE INVENTION

Custom circuit designs are often prototyped within integrated circuit devices having configurable logic, such as programmable logic devices (PLDs). One common type of PLD within which many circuit designs are prototyped is the field programmable gate array (FPGA). The high level of programmability offered by PLDs makes such devices well suited to development efforts. Generally, PLDs are a more costly option for implementing a circuit design once efforts move from development to production. Often, a standard cell implementation of the circuit design, e.g., an application specific integrated circuit (ASIC) implementation, is more cost effective in production than using a PLD.

To reduce costs, circuit designs initially developed using a PLD can be converted into standard cell circuit designs or structured standard cell circuit designs. The conversion process, however, involves several manual steps and can introduce errors into the resulting standard cell circuit design. For example, hard intellectual property (IP) cores available on the PLD, which are used by the PLD circuit design, are not available as standard cells. These cores typically are protected and unavailable to third parties. As such, replacement or alternate IP blocks must be manually selected for use in the standard cell circuit design.

In addition, the conversion process typically begins with a register transfer level (RTL) description of the PLD circuit design. The RTL circuit description is a high level circuit description that is above a gate-level description of the circuit design. The high level RTL description of the PLD circuit design undergoes the entire synthesis process using available timing constraints. The nature of some PLDs, e.g., FPGAs, is that many signal paths are not constrained. Unless manually or explicitly specified, the high level RTL is synthesized without timing requirements for such signal paths. In consequence, the standard cell circuit design may have significantly different timing characteristics than the PLD circuit design from which it was generated.

SUMMARY OF THE INVENTION

The embodiments disclosed herein relate to an automated technique for creating a standard cell circuit design from a configurable, or programmable, logic circuit design. One embodiment of the present invention can include a computer-implemented method of converting a configured circuit design for a programmable logic device (PLD) to a standard cell circuit design. The method can include unmapping a PLD circuit design to a gate level netlist, mapping logic gates of the netlist to functionally equivalent standard cells, and including the standard cells within the standard cell circuit design. Design constraints for the standard cell circuit design can be automatically generated according to the PLD circuit design. The design constraints for the standard cell circuit design can be output. The computer-implemented method can include placing and routing the standard cell circuit design according to the design constraints for the standard cell circuit design and outputting the standard cell circuit design.
In one embodiment, the PLD circuit design can be specified as a bitstream or in another binary form. Unmapping the PLD circuit design can include decomposing look-up tables (LUTs) into logic gates. The computer-implemented method further can include inserting, into the standard cell circuit design, each hard intellectual property (IP) block used by the PLD circuit design.

Generating design constraints for the standard cell circuit design can include extracting signal path delays from the PLD circuit design and setting each signal path delay as a maximum signal path delay for a corresponding signal path of the standard cell circuit design. Generating design constraints for the standard cell circuit design also can include identifying explicit timing constraints of the PLD circuit design and translating the explicit timing constraints from a format used by the PLD circuit design to a format used by the standard cell circuit design.

Generating design constraints for the standard cell circuit design can include, for a selected clock domain of the standard cell circuit design, setting a hold time constraint according to a signal path having a shortest delay between a timing start point and a timing end point within a clock domain of the PLD circuit design that corresponds to the selected clock domain. The PLD circuit design can include a first clock domain and a second clock domain. The standard cell circuit design can include a first clock domain corresponding to the first clock domain of the PLD circuit design and a second clock domain corresponding to the second clock domain of the PLD circuit design. Accordingly, generating design constraints further can include setting a hold time constraint for a signal beginning at a timing start point within the first clock domain and ending at a timing end point within the second clock domain of the standard cell circuit design according to a signal path having a shortest delay of signals having a timing start point in the first clock domain and having a timing end point in the second clock domain of the PLD circuit design.

Generating design constraints also can include identifying pins of a target PLD that are used by the PLD circuit design and constraining the standard cell circuit design to include only pins corresponding to pins of the target PLD that are used by the PLD circuit design. Unused pins of the target PLD are not implemented within the standard cell circuit design. Pin placement of the standard cell circuit design also can be constrained to match pin placement of the PLD circuit design.

Another embodiment of the present invention can include a computer-implemented method of converting a circuit design for a PLD to a standard cell circuit design. The method can include determining a gate level netlist from a binary representation of a circuit design for a PLD, mapping logic gates of the netlist to standard cells, including the standard cells within the standard cell circuit design, and inserting each hard IP block used by the PLD circuit design into the standard cell circuit design. Timing constraints for the standard cell circuit design can be generated according to timing information extracted from the PLD circuit design. The standard cell circuit design can be placed and routed according to the timing constraints for the standard cell circuit design.
The standard cell circuit design can be output. Generating timing constraints for the standard cell circuit design can include using input/output (I/O) signal path delays of the PLD circuit design as maximum signal path delays for corresponding signal paths of the standard cell circuit design and translating non-I/O signal path delays of the circuit design for the PLD from a first format to a second format used by the standard cell circuit design. Generating timing constraints for the standard cell circuit design can include, for a clock domain of the standard cell circuit design, setting a hold time constraint according to a signal path having a shortest delay between a timing start point and a timing end point within a corresponding clock domain of the PLD circuit design.

In one aspect, the PLD circuit design can include a first clock domain and a second clock domain. The standard cell circuit design can include a first clock domain corresponding to the first clock domain of the PLD circuit design and a second clock domain corresponding to the second clock domain of the PLD circuit design. Accordingly, generating design constraints further can include setting a hold time constraint for a signal beginning at a timing start point within the first clock domain and ending at a timing end point within the second clock domain of the standard cell circuit design according to a signal path having a shortest delay of signals having a timing start point in the first clock domain and having a timing end point in the second clock domain of the PLD circuit design.

The computer-implemented method can include identifying pins of a target PLD that are used by the PLD circuit design and constraining the standard cell circuit design to include only pins corresponding to pins of the target PLD that are used by the PLD circuit design. Unused pins of the target PLD are not implemented within the standard cell circuit design. Pin placement of the standard cell circuit design can be constrained to match pin placement of the PLD circuit design.

Yet another embodiment of the present invention can include a computer program product including a computer-usable medium having computer-usable program code that, when executed by a data processing system, causes the data processing system to perform the various steps and/or functions disclosed herein.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a flow chart illustrating a method of transforming a circuit design for a programmable logic device into a standard cell circuit design in accordance with one embodiment of the present invention.

DETAILED DESCRIPTION

While the specification concludes with claims defining the features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the description in conjunction with the drawing. As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the inventive arrangements in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the invention.
The embodiments disclosed herein relate to automatically creating a circuit design using standard cells from a configured circuit design for an integrated circuit device featuring programmable logic. One such device is a programmable logic device (PLD), which is referred to throughout this description for illustrative purposes. A configured circuit design, implementable in configurable logic, is referred to in this description as a PLD circuit design.

In accordance with the inventive arrangements disclosed herein, a PLD circuit design can be converted into a standard cell circuit design such as an application specific integrated circuit (ASIC). Rather than taking a behavioral description of the PLD circuit design, such as a high level register transfer level (RTL) circuit description, and processing the behavioral description through a standard cell-based implementation flow, information can be extracted from the implementation file(s) of the PLD circuit design. This information can be applied to the standard cell circuit design. It is noted here that the term "standard cell" is often, though not exclusively, used herein to refer to elements of an integrated circuit device that are formed in hardware during device fabrication. A PLD circuit, by contrast, is a circuit that is formed by configuring a PLD, i.e., by loading specific bitstreams into registers in the programmable logic device. A PLD circuit can exist as a sequence of configuration bits in a configuration memory, as a fully configured programmable logic device, or solely as a simulation in a simulation computer.

Circuit components of the PLD circuit design can be decomposed into constituent logic gates and mapped to standard cells of a standard cell library. Information relating to pin configuration, timing, and other aspects of the PLD circuit design can be extracted and used to generate design constraints for the standard cell circuit design. The standard cell circuit design can be placed and routed utilizing the design constraints generated from the information extracted from the PLD circuit design. A high-level sketch of this flow is given below.
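Expressed as code, the overall flow reads as follows. This is a structural paraphrase only: each helper is a hypothetical stand-in for one of the numbered steps of Fig. 1, which the remainder of this description walks through in detail.

```python
def pld_to_standard_cell(pld_design, cell_library):
    """Sketch of the Fig. 1 conversion flow (all helpers hypothetical)."""
    netlist = unmap_to_gates(pld_design)                       # step 110
    hard_ip = identify_hard_ip(pld_design)                     # step 115
    cells = map_to_standard_cells(netlist, cell_library)       # step 120
    design = include_cells(cells)                              # step 125
    insert_hard_ip(design, hard_ip)                            # step 130
    constraints = translate_explicit_constraints(pld_design)   # step 135
    constraints += derive_implicit_constraints(pld_design)     # step 140
    output(constraints)                                        # step 145
    place(design, constraints)                                 # step 150
    route(design, constraints)                                 # step 155
    return design                                              # step 160
```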
Fig. 1 presents a flow chart illustrating a computer-implemented method of converting a circuit design for a PLD into a standard cell circuit design in accordance with one embodiment of the present invention. The method illustrated in Fig. 1 can be implemented using any of a variety of computer-based electronic design automation (EDA) tools for developing circuit designs for implementation within PLDs or as standard cell circuit designs, e.g., as ASICs. The embodiments disclosed herein can be implemented by a data processing system executing suitable operational software, e.g., one or more EDA tools.

In step 105, a PLD circuit design that is to be converted into a standard cell circuit design can be identified. The PLD circuit design can specify complete placement and routing information. For example, in one embodiment, the PLD circuit design can be specified as a post synthesis netlist which includes physical placement information. In another embodiment, the PLD circuit design can be specified in binary format, e.g., a format that is not readable by a human being. For example, the PLD circuit design can be a bitstream that, when loaded into a target PLD, configures the PLD to implement or instantiate the circuit design. In another example, the PLD circuit design can be one or more implementation files specifying complete placement and routing information that may be translated or further processed into a bitstream. In the case of a circuit design to be implemented within a target FPGA of the type available from Xilinx, Inc. of San Jose, California, the PLD circuit design can be a Native Circuit Description (NCD) file. An NCD file represents the physical circuit description of the design as it applies to a specific target device. An NCD file can specify complete placement and routing information and may be translated into a bitstream.

In step 110, the PLD circuit design, e.g., the binary files specifying the PLD circuit design, can be unmapped into a gate level netlist. In unmapping the PLD circuit design, various tasks can be performed. One task is that logic blocks can be decomposed into logic gates. The logical expression implemented by each logic block, whether a complex logic block or a single LUT, can be unmapped, or decomposed, into a group of one or more constituent logic gates; a simple sketch of this decomposition is given below. As used herein, the phrase "complex logic block" can refer to a portion of the circuit design that is implemented using a plurality of programmable circuit elements, e.g., LUTs and flip-flops. Examples of complex logic blocks can include carry chains or distributed random access memories. When taken collectively, the group of logic gate(s) can implement the same logical expression, or functionality, as the decomposed logic block. It should be appreciated that in decomposing each logic block, the connectivity of the resulting logic gates can be specified such that the functionality of the group of logic gates is the same as the decomposed logic block, whether a single LUT or a complex logic block, from which the group of logic gates was derived.

Another task is that logic blocks can be identified and tracked. An association between each logic block and the logic gates into which the logic block is decomposed can be created. Accordingly, a group of one or more logic gates, into which a complex logic block has been decomposed, can be annotated or otherwise identified as implementing a particular function, e.g., as a carry chain. The annotation or description associated with the logic gates can be used to map such logic gates to a suitable standard cell. As is known, a standard cell library can include standardized, programmatic descriptions of circuit components such as AND gates, OR gates, inverters, multiplexers, and the like. Each programmatic description of a circuit structure or component, whether simple or complex, can be referred to as a "standard cell." When incorporated into a circuit design and subjected to an implementation flow, e.g., synthesis, mapping, placement, and routing, the standard cell instantiates the specified hardware within the target device.

It should be appreciated that by decomposing structures such as LUTs and complex logic blocks into constituent logic gates, the size of the circuit design can be reduced from that used within the PLD circuit implementation. More particularly, the physical area needed to implement the circuit design using an ASIC, as compared to a PLD, can be reduced. Such is the case as the circuitry traditionally incorporated into the logic blocks, e.g., LUTs and/or complex logic blocks, to achieve programmability in the PLD is no longer needed and, therefore, can be excluded from the standard cell implementation.
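To make the LUT-decomposition task concrete, the sketch below turns a LUT truth table into a naive sum-of-products gate network: one AND minterm per '1' row feeding a single OR. This is illustrative only; a real tool would first minimize the logic, deduplicate the inverters this sketch emits repeatedly, and handle complex logic blocks separately. The data model and names are assumptions.

```python
from itertools import product

def lut_to_gates(truth_table):
    """Decompose a K-input LUT into a sum-of-products gate list.

    truth_table: list of 2**K output bits, indexed by the input vector
                 read as a binary number (input 0 is the LSB).
    Returns (gates, output_net), each gate a (type, out_net, in_nets)
    tuple over nets 'in0'..'in{K-1}' and their inversions 'nin*'.
    """
    k = (len(truth_table) - 1).bit_length()
    gates, products = [], []
    for i, bits in enumerate(product((0, 1), repeat=k)):
        if not truth_table[i]:
            continue
        ins = []
        for j, b in enumerate(reversed(bits)):   # bit j drives input j
            if not b:
                gates.append(("NOT", f"nin{j}", [f"in{j}"]))
            ins.append(f"in{j}" if b else f"nin{j}")
        gates.append(("AND", f"p{i}", ins))      # one minterm per '1' row
        products.append(f"p{i}")
    if not products:                             # constant-0 LUT
        return [("CONST0", "lut_out", [])], "lut_out"
    gates.append(("OR", "lut_out", products))
    return gates, "lut_out"
```

For example, `lut_to_gates([0, 1, 1, 0])` reproduces a 2-input XOR as the two minterms matching rows 01 and 10 of the truth table.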
In step 115, hard intellectual property (IP) blocks utilized by the PLD circuit design can be identified. A hard IP block can refer to a portion of the PLD circuit design that utilizes specialized hardware within the target PLD, e.g., within the physical PLD. Examples of hard IP blocks can include digital signal processing blocks, block random access memories, processors, input/output (I/O) blocks, or the like. The hard IP blocks identified within the PLD circuit design are not decomposed as is the case with logic blocks composed of LUTs, flip-flops, and other varieties of programmable logic. Rather, hard IP blocks available in the target PLD that are utilized by the PLD circuit design can be incorporated within the standard cell circuit design directly or in modified form, as will be described herein in further detail.

In step 120, the logic gates specified by the gate level netlist can be mapped to standard cells of a standard cell library; a sketch of this mapping follows below. The logic gates of the gate level netlist can be matched to standard cell versions of each respective logic gate selected from the standard cell library. For example, logical OR gates can be mapped to logical OR gate standard cells. More complex logic blocks can be mapped to more complex standard cells, e.g., a carry chain of the PLD to a carry chain of the standard cell library. It should be appreciated that when a one-to-one mapping for a given circuit structure of the PLD does not exist, that structure may be decomposed, or further decomposed as the case may be, into its constituent parts for purposes of mapping the individual parts to one or more available standard cells. For example, a carry chain may be mapped to a plurality of standard cell versions of the constituent logic gates that implement the carry chain within the PLD circuit design.

In mapping the logic gate netlist to standard cells, flip-flop logic gates of the PLD circuit design can be converted to scan flip-flop standard cells within the standard cell circuit design. As is known, a scan flip-flop can operate in a normal mode or a test mode. Typically, a scan flip-flop is implemented as a flip-flop that includes a multiplexer or other switching mechanism. When placed in test mode, the scan flip-flop can behave as a serial shift register. When connected with other scan flip-flops via a scan chain, a test mode can be realized where the scan flip-flops become a single, large shift register. In test mode, data can be clocked serially through all the scan flip-flops and out of an output pin at the same time as new data is clocked in from an input pin.

In step 125, the standard cells can be included or inserted into the standard cell circuit design. That is, each standard cell to which a portion of the PLD circuit design has been mapped can be inserted or included within the standard cell circuit design.
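A minimal sketch of the mapping step, under assumed names: the cell library is a plain dictionary, gates are the tuples produced by the unmapping sketch above, and `decompose` is a hypothetical helper that breaks a structure with no one-to-one match into smaller pieces. Note how flip-flops are deliberately routed to scan flip-flop cells.

```python
# Hypothetical library: logic function -> standard cell name. Flip-flops
# map to scan flip-flop cells so scan chains can be stitched later.
STD_CELLS = {"AND": "AND2_X1", "OR": "OR2_X1", "NOT": "INV_X1",
             "CONST0": "TIELO_X1", "DFF": "SDFF_X1"}

def map_netlist(gates, library=STD_CELLS):
    """Map unmapped logic gates to functionally equivalent standard cells."""
    mapped = []
    for gtype, out, ins in gates:
        cell = library.get(gtype)
        if cell is None:
            # No one-to-one match: decompose the structure further and
            # map its constituent parts instead. (A real mapper would
            # similarly split wide gates into available 2-input cells.)
            mapped.extend(map_netlist(decompose(gtype, out, ins), library))
        else:
            mapped.append((cell, out, ins))
    return mapped
```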
In step 130, the hard IP blocks used by the PLD circuit design can be included or instantiated within the standard cell circuit design. In one embodiment, the hard IP blocks of the PLD circuit design can be implemented as they are implemented within the PLD circuit design. That is, the hard IP blocks can be incorporated directly into the standard cell circuit design. In another embodiment, only those portions of a hard IP block that are used in the PLD circuit design are implemented within the standard cell circuit design; portions of the hard IP block that implement functionality that is not utilized can be excluded. Any hard IP blocks available in the target PLD that are not used by the PLD circuit design can be excluded from the standard cell circuit design as well.

In another embodiment, a wrapper can be included for one or more or each hard IP block. The wrapper is code, e.g., RTL level code, that implements testing circuitry that is not necessary within a PLD. For example, within a PLD, the programmable fabric of the PLD can be reconfigured to test the hard IP block. When implemented within a standard cell circuit design, however, that functionality is not built into the device. Accordingly, a wrapper can be added to one or more or each hard IP block that implements or instantiates testing circuitry for use with the hard IP block.

It should be appreciated that the connectivity specified in the standard cell circuit design can be the same as, or derived from, the connectivity of the PLD circuit design. For example, communication between logic gates, hard IP blocks, and the like within the PLD circuit design can be unchanged with respect to the standard cells, hard IP blocks, etc. within the standard cell circuit design.

In step 135, design constraints for the standard cell circuit design can be generated according to explicitly defined design constraints for the PLD circuit design. Explicitly defined design constraints for the PLD circuit design can be translated into design constraints that are usable by a standard cell circuit design implementation flow. Explicit design constraints can be specified, for example, within a file having a UCF extension, which stands for User Constraints File (UCF). A UCF file is an ASCII file specifying constraints on a PLD circuit design for an FPGA of the variety available from Xilinx, Inc.

Explicit design constraints can include any of a variety of different types of constraints. One type of explicit design constraint can include a pin placement and/or packaging constraint. For example, a design constraint can be generated for the standard cell circuit design that states that the same packaging or package used by the target PLD is to be selected and used for the standard cell circuit design. I/O pins of the target PLD that are utilized by the PLD circuit design can be identified, as well as any unused I/O pins. Design constraints can be created for the standard cell circuit design that ensure that the same I/O pin configuration, with respect to used I/O pins of the target PLD, will exist for the standard cell circuit design. That is, each physical I/O pin of the target PLD used by the PLD circuit design will have a corresponding physical I/O pin on the standard cell circuit design at a same relative location. The resulting standard cell circuit design will fit the same socket of a circuit board as the target PLD without any change to the routing or layout of the circuit board upon which the socket is mounted. In this sense, the pin configuration or pin placement of the PLD circuit design and the standard cell circuit design can be said to "match" or be equivalent. A sketch of deriving such pin constraints is given below.
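The pin-related constraints can be pictured as a simple filter over the target PLD's pin map. The sketch below is hypothetical in every name and in its constraint representation; it only illustrates the two rules just described: unused pins are dropped entirely, and used pins are fixed at their original relative package locations.

```python
def pin_constraints(package, pld_pins, used_pins):
    """Derive pin/packaging constraints for the standard cell design.

    package:   package identifier of the target PLD (reused as-is).
    pld_pins:  {pin_name: (x, y) relative package location} of the PLD.
    used_pins: set of pin names the PLD circuit design actually uses.
    """
    constraints = [("use_package", package)]
    for name, location in pld_pins.items():
        if name in used_pins:
            # Same pad, same relative location: the ASIC drops into the
            # same board socket as the target PLD.
            constraints.append(("place_pin", name, location))
        # Unused pins get no constraint; they are not implemented at all.
    return constraints
```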
Another set of explicit design constraints that can be generated can include timing constraints. Explicit timing constraints can include, for example, I/O related timing constraints. I/O related timing constraints must be explicitly defined by a designer, as timing information relating to a system external to the circuit design is not knowable from an analysis of the PLD circuit design itself. Any timing constraints that are non-I/O related, but explicitly specified by a designer with respect to the PLD circuit design or specified for a standard cell circuit design, if available, can be translated into a format that is usable within an implementation flow for the standard cell circuit design. For example, any explicit constraints originally specified within a UCF can be translated into a format compatible with, and specified as, an SDC or other file type that is usable by EDA tools capable of performing all or a part of an implementation flow for a standard cell circuit design. In one embodiment, such explicit timing constraints, e.g., any user- or designer-specified timing constraint or requirement, can be specified as a minimum signal path constraint for the standard cell circuit design.

In step 140, design constraints for the standard cell circuit design can be generated according to implicit design constraints for the PLD circuit design. Implicit design constraints refer to constraints that are not explicitly specified; they are determined from a review of the implementation information of the PLD circuit design. For example, timing delays for any signal path within the PLD circuit design can be used to create timing constraints for the standard cell circuit design. Though no explicit timing constraint for a given signal path may exist in the UCF file, for example, the delay for that signal path as determined by an EDA tool can be used to create a timing constraint for the same or corresponding signal path when implemented within the standard cell circuit design. As noted, any design constraint generated for the standard cell circuit design can be specified in a format that is usable by an implementation flow for the standard cell circuit design.

In one embodiment, regarding implicit timing constraints, the timing of each signal path within each clock domain can be extracted from the PLD circuit design. For example, each signal path starting from an I/O and continuing to a first register encountered on that signal path from the I/O can be identified, each signal path from an I/O to another I/O through any combinatorial logic can be identified, each signal path from a register to an I/O can be identified, and each signal path from a register to a register can be identified. The timing of such signal paths can be determined from the PLD circuit design. The delays determined from the PLD circuit design can be used as maximum signal path constraints for the same, e.g., corresponding, signal paths implemented within the standard cell circuit design. For example, the timing delay of a selected signal path of the PLD circuit design, as determined using a PLD EDA tool, can be used as the maximum time constraint for the same signal path when implemented or specified in the standard cell circuit design. A sketch of this extraction is given below.
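In code, the implicit max-delay extraction is a one-liner per path. The sketch assumes the PLD timing analyzer can report (start, end, delay) triples for the four path classes just listed; the SDC-style output format shown is an assumption for illustration, not a verbatim requirement of any tool.

```python
def max_delay_constraints(pld_paths):
    """Turn extracted PLD path delays into max-delay constraints.

    pld_paths: iterable of (start, end, delay_ns) triples covering
    I/O-to-register, I/O-to-I/O, register-to-I/O, and
    register-to-register paths of the PLD circuit design.
    """
    return [f"set_max_delay {delay:.3f} -from {start} -to {end}"
            for start, end, delay in pld_paths]
```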
Implicit timing constraints also can be generated within a selected clock domain and across clock domains. For example, a hold time constraint for the standard cell circuit design can be determined and specified according to a signal path having a shortest delay between a timing start point and a timing end point within a selected clock domain of the PLD circuit design. That is, a signal path with a shortest delay for a selected clock domain within the PLD circuit design can be identified. The delay can be measured from a timing, e.g., a clocked, start point to a timing end point of the signal path, so long as the timing start point and the timing end point are within the selected, e.g., same, clock domain. The delay of that signal path can be used as a hold time constraint for each other signal path within the clock domain of the standard cell circuit design that corresponds to, or is the same as, the selected clock domain of the PLD circuit design. Thus, if 10 clock domains exist, there can be 10 hold times, e.g., one hold time for each clock domain.

With respect to cross-clock domains, a first clock domain and a second clock domain of the PLD circuit design can be selected. A signal path that begins at a timing start point in the first clock domain and continues to a timing end point within the second clock domain, and that has a shortest delay of each signal path crossing from the first clock domain to the second clock domain, can be selected from the PLD circuit design. The delay of the selected cross-clock domain signal path can be used as a hold time constraint for each signal path crossing from the first clock domain to the second clock domain as implemented within the standard cell circuit design. That is, the delay of the selected cross-clock domain signal path of the PLD can be used as the hold time constraint for each signal path crossing from the first clock domain to the second clock domain of the standard cell circuit design, where the first and second clock domains of the standard cell circuit design correspond to, or are the same as, the first and second clock domains of the PLD circuit design.

It should be appreciated that the cross-clock domain hold time constraints are directional in nature. That is, if a signal flows from clock domain A into clock domain B, the hold time constraint for signals going from clock domain A to clock domain B will be the same for all such signals, but different for signals going from clock domain B to clock domain A. A different hold time constraint can be determined for the group of signals going from clock domain B to clock domain A in a similar or the same manner as described. A sketch covering both the intra-domain and the cross-domain cases follows.
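Both the intra-domain and the directional cross-domain rules reduce to "take the shortest extracted delay per (launch, capture) domain pair." A minimal sketch, assuming register-to-register paths can be enumerated together with their launch and capture clock domains:

```python
from collections import defaultdict

def hold_constraints(pld_paths):
    """Derive one hold-time constraint per (launch, capture) domain pair.

    pld_paths: iterable of (launch_domain, capture_domain, delay_ns) for
    every register-to-register path of the PLD circuit design. Pairs
    (A, A) give the per-domain hold times; pairs (A, B) and (B, A) are
    kept separate because the constraint is directional.
    """
    shortest = defaultdict(lambda: float("inf"))
    for launch, capture, delay in pld_paths:
        key = (launch, capture)
        shortest[key] = min(shortest[key], delay)
    # The shortest PLD delay for each pair becomes the hold-time
    # constraint for every corresponding standard cell path.
    return dict(shortest)
```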
Another example of an implicit timing constraint can relate to pin, ball, and/or package related timing constraints. Such timing characteristics should be the same as, or approximately the same as, those of the PLD circuit design. Since, for example, the resulting standard cell circuit design implementation typically has fewer I/O pins than the PLD circuit design implementation, timing can be influenced by effects such as Simultaneous Switching of Outputs (SSO). Since pins of the target PLD that are not used by the PLD circuit design will not be included in the resulting ASIC implementation, the lack of pins can influence timing of the ASIC implementation. For example, certain circuit properties, e.g., capacitance and the like, may vary from the quantities existing in the PLD circuit design implementation to the ASIC as a result of the reduced number of pins. Such changes must be accounted for with regard to placement and/or routing.

In step 145, design constraints generated for the standard cell circuit design can be output. As used herein, "outputting" and/or "output" can mean, for example, writing to a file, storing in memory, whether persistent memory or not, writing to a user display or other output device, playing audible notifications, sending or transmitting to another system, exporting, or the like.

In step 150, the standard cell circuit design can be placed using the design constraints output in step 145. In step 155, the standard cell circuit design can be routed, likewise using the design constraints output in step 145. Both placement and routing can be performed so that all design constraints of the standard cell circuit design are met. During routing, scan chains can be implemented for the standard cell circuit design. As is known, a scan chain refers to a design-for-test technique in which scan flip-flops can be linked in a way to form a serial shift register when placed in a testing mode.

In step 160, the standard cell circuit design can be output. The resulting standard cell circuit design can be verified according to the design constraints determined or extracted from the PLD circuit design.

The embodiments disclosed herein provide an automated technique for creating a standard cell circuit design from a PLD circuit design. The embodiments disclosed herein facilitate increased area reduction and eliminate the need for designers or customers to provide RTL files for conversion into a standard cell circuit design.

The flowchart in the figure illustrates the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart may represent a module, segment, or portion of code, which comprises one or more portions of computer-usable program code that implements the specified logical function(s). It is noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It also should be noted that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Embodiments of the present invention can be realized in hardware, software, or a combination of hardware and software. The embodiments can be realized in a centralized fashion in one data processing system or in a distributed fashion where different elements are spread across several interconnected data processing systems. Any kind of data processing system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software can be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.

A data processing system, e.g., a computer or computer system, suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.

Embodiments of the present invention further can be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein. The computer program product can include a computer-usable or computer-readable medium having computer-usable program code which, when loaded in a computer system, causes the computer system to perform the functions described herein. Examples of computer-usable or computer-readable media can include, but are not limited to, optical media, magnetic media, computer memory, one or more portions of a wired or wireless network through which computer-usable program code can be propagated, or the like.

The terms "computer program," "software," "application," "computer-usable program code," variants and/or combinations thereof, in the present context, mean any expression, in any language, code, or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code, or notation; b) reproduction in a different material form. For example, a computer program can include, but is not limited to, a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library, and/or other sequence of instructions designed for execution on a computer system.

The terms "a" and "an," as used herein, are defined as one or more than one. The term "plurality," as used herein, is defined as two or more than two. The term "another," as used herein, is defined as at least a second or more. The terms "including" and/or "having," as used herein, are defined as comprising, i.e., open language. The term "coupled," as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically, e.g., communicatively linked through a communication channel or pathway or another component or system.

The embodiments disclosed herein can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the various embodiments of the present invention. |
A method performs XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weights. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array. |
CLAIMS WHAT IS CLAIMED IS: 1. An apparatus comprising: a compute-in-memory array comprising rows and columns, the compute-in-memory array configured: to adjust an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value; and to calculate a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array programmed with a set of weight values, in which the adjusted activation threshold and the conversion bias current reference are used as a threshold for determining output values of the compute-in-memory array. 2. The apparatus of claim 1, further comprising a comparator configured to compare a bit line population count to a sum of the conversion bias current reference and the adjusted activation threshold in order to determine an output of a bit line. 3. The apparatus of claim 1, in which an artificial neural network including the compute-in-memory array comprises a binary neural network. 4. The apparatus of claim 1, in which the activation threshold is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. 5. The apparatus of claim 1, in which the conversion bias current reference is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. 6. A method comprising: adjusting an activation threshold generated for each column of a compute-in-memory array having rows and columns based on a function of a weight value and an activation value; and calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weight values, in which the adjusted activation threshold and the conversion bias current reference are used as a threshold for determining output values of the compute-in-memory array. 7. The method of claim 6, further comprising comparing a bit line population count to a sum of the conversion bias current reference and the adjusted activation threshold in order to determine an output of a bit line. 8. The method of claim 6, in which an artificial neural network including the compute-in-memory array comprises a binary neural network. 9. The method of claim 6, in which the activation threshold is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. 10. The method of claim 6, in which the conversion bias current reference is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. 11. A non-transitory computer-readable medium having non-transitory program code recorded thereon, the program code comprising: program code to adjust an activation threshold generated for each column of a compute-in-memory array having rows and columns based on a function of a weight value and an activation value; and program code to calculate a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weight values, in which the adjusted activation
threshold and the conversion bias current reference are used as a threshold for determining output values of the compute-in-memory array. 12. The non-transitory computer-readable medium of claim 11, further comprising program code to compare a bit line population count to a sum of the conversion bias current reference and the adjusted activation threshold in order to determine an output of a bit line. 13. The non-transitory computer-readable medium of claim 11, in which an artificial neural network subject to the adjusting and the calculating comprises a binary neural network. 14. The non-transitory computer-readable medium of claim 11, in which the activation threshold is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. 15. The non-transitory computer-readable medium of claim 11, in which the conversion bias current reference is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. 16. An apparatus comprising: means for adjusting an activation threshold generated for each column of a compute-in-memory array having rows and columns based on a function of a weight value and an activation value; and means for calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weight values, in which the adjusted activation threshold and the conversion bias current reference are used as a threshold for determining output values of the compute-in-memory array. 17. The apparatus of claim 16, further comprising means for comparing a bit line population count to a sum of the conversion bias current reference and the adjusted activation threshold in order to determine an output of a bit line. 18. The apparatus of claim 16, in which an artificial neural network including the compute-in-memory array comprises a binary neural network. 19. The apparatus of claim 16, in which the activation threshold is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. 20. The apparatus of claim 16, in which the conversion bias current reference is less than half of a number of rows of the compute-in-memory array, the number of rows corresponding to a size of the input vector. |
PERFORMING XNOR EQUIVALENT OPERATIONS BY ADJUSTING COLUMN THRESHOLDS OF A COMPUTE-IN-MEMORY ARRAY Claim of priority under 35 U.S.C. §119 [0001] The present Application for Patent claims priority to Non-provisional Application No. 16/565,308 entitled "PERFORMING XNOR EQUIVALENT OPERATIONS BY ADJUSTING COLUMN THRESHOLDS OF A COMPUTE-IN-MEMORY ARRAY" filed September 9, 2019, assigned to the assignee hereof and hereby expressly incorporated by reference herein. Field [0002] Aspects of the present disclosure generally relate to performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. Background [0003] Very low bit width neural networks, such as binary neural networks (BNNs), are powerful new approaches in deep neural networks (DNNs). Binary neural networks can significantly reduce data traffic and save power. For example, the memory storage for binary neural networks is significantly reduced because both the weights and neuron activations are binarized to -1 or +1, as compared to floating/fixed-point precision. [0004] Digital complementary metal-oxide-semiconductor (CMOS) processing, however, uses a [0,1] basis. In order to carry out binary implementations associated with these binary neural networks, the binary network's [-1,+1] basis should be transformed to the CMOS [0,1] basis. The transformation employs a computationally intense exclusive-negative OR (XNOR) operation. [0005] Compute-in-memory systems can implement very low bit width neural networks, such as binary neural networks (BNNs). Compute-in-memory systems have memory with some processing capabilities. For example, each intersection of a bit line
and a word line represents a filter weight value, which is multiplied by the input activation on the word line to generate a product. The individual products along each bit line are then summed to generate corresponding output values of an output tensor. This implementation may be deemed a multiply accumulate (MAC) operation. These MAC operations can transform the binary network's [-1,+1] basis to the CMOS [0,1] basis. [0006] Conventionally, the transformation with a compute-in-memory system is achieved by completing an XNOR operation at each bit cell. The results along each bit line are then summed to generate corresponding output values. Unfortunately, including an XNOR function in each bit cell consumes a large area and increases power consumption. [0007] In the conventional implementation, each bit cell includes a basic memory function of read and write plus an additional logic function of XNOR between the input and cell state. As a result of including the XNOR capability, the number of transistors for each cell in the memory (e.g., static random-access memory (SRAM)) increases from six or eight to twelve, which significantly increases cell size and power consumption. It would be desirable to eliminate the XNOR operation while still being able to transform from a binary neural network [-1,+1] basis to a CMOS [0,1] basis. SUMMARY [0008] In one aspect of the present disclosure, an apparatus includes a compute-in-memory array that includes columns and rows. The compute-in-memory array is configured to adjust an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The compute-in-memory array is also configured to calculate a conversion bias current reference based on an input value from an input vector to the compute-in-memory array. The compute-in-memory array is programmed with a set of weight values. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array.
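By way of a non-limiting illustration of the bit-line multiply-accumulate behavior described in the background above, the following short Python sketch models one MAC pass over such an array (the array sizes and variable names are assumptions for this example only and are not part of the original disclosure):

    import numpy as np

    # Hypothetical compute-in-memory MAC pass: W[i, j] is the weight stored at
    # the crossing of word line i and bit line j; X[i] is the activation driven
    # on word line i. Each bit line j accumulates sum_i X[i] * W[i, j].
    rng = np.random.default_rng(0)
    M, N = 8, 4                              # 8 word lines (rows), 4 bit lines (columns)
    W = rng.choice([-1, 1], size=(M, N))     # binary synaptic weights
    X = rng.choice([-1, 1], size=M)          # binary input activations

    Y = X @ W                                # per-bit-line multiply-accumulate sums
    print(Y)                                 # one weighted sum per bit line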
[0009] Another aspect discloses a method for performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array having rows and columns. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array. The compute-in-memory array is programmed with a set of weight values. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array. [0010] In another aspect, a non-transitory computer-readable medium records non-transitory program code. The program code, when executed by a processor(s), causes the processor(s) to adjust an activation threshold generated for each column of the compute-in-memory array having rows and columns based on a function of a weight value and an activation value. The program code also causes the processor(s) to calculate a conversion bias current reference based on an input value from an input vector to the compute-in-memory array. The compute-in-memory array is programmed with a set of weight values. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array. [0011] Another aspect discloses an apparatus for performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array having rows and columns. The apparatus includes means for adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The apparatus also includes means for calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array. The compute-in-memory array is programmed with a set of weight values. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array.
[0012] This has outlined, rather broadly, the features and technical advantages of the present disclosure in order that the detailed description that follows may be better understood. Additional features and advantages of the present disclosure will be described below. It should be appreciated by those skilled in the art that the present disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the present disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the present disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure. BRIEF DESCRIPTION OF THE DRAWINGS [0013] The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify correspondingly throughout. [0014] FIGURE 1 illustrates an example implementation of designing a neural network using a system-on-a-chip (SOC), including a general-purpose processor, in accordance with certain aspects of the present disclosure. [0015] FIGURES 2A, 2B, and 2C are diagrams illustrating a neural network in accordance with aspects of the present disclosure. [0016] FIGURE 2D is a diagram illustrating an exemplary deep convolutional network (DCN) in accordance with aspects of the present disclosure. [0017] FIGURE 3 is a block diagram illustrating an exemplary deep convolutional network (DCN) in accordance with aspects of the present disclosure.
[0018] FIGURE 4 illustrates an architecture showing a compute-in-memory (CIM) array of an artificial neural network, according to aspects of the present disclosure. [0019] FIGURE 5 illustrates an architecture for performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network, according to aspects of the present disclosure. [0020] FIGURE 6 illustrates a method for performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network, in accordance with aspects of the present disclosure. DETAILED DESCRIPTION [0021] The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. [0022] Based on the teachings, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth. In addition, the scope of the disclosure is intended to cover such an apparatus or method practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth. It should be understood that any aspect of the disclosure disclosed may be embodied by one or more elements of a claim. [0023] The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.
[0024] Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof. [0025] Compute-in-memory (CIM) is a method of performing a multiply and accumulate (MAC) operation in a memory array. Compute-in-memory may improve parallelism within a memory array by activating multiple rows and using an analog column current to conduct multiplication and summation operations. For example, SRAM bit cells may be customized to enable XNOR and bit-counting operations for binary neural networks. [0026] Conventionally, compute-in-memory binary neural network implementations are achieved by completing XNOR operations at each bit cell and summing the results for each bit line. Adding an XNOR function in each bit cell increases layout area and increases power consumption. For example, the number of transistors in each cell in the memory (e.g., SRAM) increases from six or eight to twelve. [0027] Aspects of the present disclosure are directed to performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network (e.g., a binary neural network). In one aspect, an activation threshold for each column of the memory array is adjusted based on a function of a weight value and an activation value. A conversion bias current reference is calculated based on an input value from an input vector. [0028] In one aspect, a bit line population count is compared to a sum of the conversion bias current reference and the adjusted activation threshold to determine an output of a bit line. The bit line population count is a sum of each output of the bitcells
corresponding to a bit line of the memory array. For example, the sum of the output (or population count) of each bitcell associated with a first bit line is provided to the comparator as a first input. The population count is then compared to the sum of the conversion bias current reference and the adjusted activation threshold to determine the output of the bit line. In some aspects, the activation threshold is less than half of a number of rows of the memory array. The number of rows corresponds to a size of the input vector. In some aspects, the conversion bias current reference is less than half of a number of rows of the memory array. [0029] The artificial neural network of the present disclosure may be a binary neural network, a multi-bit neural network, or a very low bit-width neural network. Aspects of the present disclosure may be applicable to devices (e.g., edge devices) that require very low memory, processing, and power, or to large networks that could benefit from the memory savings resulting from a binary format. Aspects of the present disclosure reduce the size and improve the power consumption of the memory by eliminating XNOR operations in compute-in-memory systems that implement binary neural networks. For example, the basis transformation is implemented to avoid use of the XNOR function and its corresponding transistor(s) in each bit cell, thereby reducing the size of the memory. [0030] FIGURE 1 illustrates an example implementation of a system-on-a-chip (SOC) 100, which may include a central processing unit (CPU) 102 or a multi-core CPU configured for transforming multiply and accumulate operations for a compute-in-memory (CIM) array of an artificial neural network in accordance with certain aspects of the present disclosure. Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., neural network with weights), delays, frequency bin information, and task information may be stored in a memory block associated with a neural processing unit (NPU) 108, in a memory block associated with a CPU 102, in a memory block associated with a graphics processing unit (GPU) 104, in a memory block associated with a digital signal processor (DSP) 106, in a memory block 118, or may be distributed across multiple blocks. Instructions executed at the CPU 102 may be loaded from a program memory associated with the CPU 102 or may be loaded from a memory block 118.
[0031] The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104, a DSP 106, a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or navigation module 120, which may include a global positioning system. [0032] The SOC 100 may be based on an ARM instruction set. In an aspect of the present disclosure, the instructions loaded into the general-purpose processor 102 may comprise code to adjust an activation threshold for each column of the array based on a function of a weight value (e.g., a weight matrix) and an activation value. The general- purpose processor 102 may further comprise code to calculate a conversion bias current reference based on an input value from an input vector. [0033] Deep learning architectures may perform an object recognition task by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data. In this way, deep learning addresses a major bottleneck of traditional machine learning. Prior to the advent of deep learning, a machine learning approach to an object recognition problem may have relied heavily on human engineered features, perhaps in combination with a shallow classifier. A shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of the feature vector components may be compared with a threshold to predict to which class the input belongs. Human engineered features may be templates or kernels tailored to a specific problem domain by engineers with domain expertise. Deep learning architectures, in contrast, may learn to represent features that are similar to what a human engineer might design, but through training. Furthermore, a deep network may learn to represent and recognize new types of features that a human might not have considered.
[0034] A deep learning architecture may learn a hierarchy of features. If presented with visual data, for example, the first layer may learn to recognize relatively simple features, such as edges, in the input stream. In another example, if presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. The second layer, taking the output of the first layer as input, may learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. For instance, higher layers may learn to represent complex shapes in visual data or words in auditory data. Still higher layers may learn to recognize common visual objects or spoken phrases. [0035] Deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure. For example, the classification of motorized vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features may be combined at higher layers in different ways to recognize cars, trucks, and airplanes. [0036] Neural networks may be designed with a variety of connectivity patterns. In feed-forward networks, information is passed from lower to higher layers, with each neuron in a given layer communicating to neurons in higher layers. A hierarchical representation may be built up in successive layers of a feed-forward network, as described above. Neural networks may also have recurrent or feedback (also called top-down) connections. In a recurrent connection, the output from a neuron in a given layer may be communicated to another neuron in the same layer. A recurrent architecture may be helpful in recognizing patterns that span more than one of the input data chunks that are delivered to the neural network in a sequence. A connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection. A network with many feedback connections may be helpful when the recognition of a high-level concept may aid in discriminating the particular low-level features of an input. [0037] The connections between layers of a neural network may be fully connected or locally connected. FIGURE 2A illustrates an example of a fully connected neural network 202. In a fully connected neural network 202, a neuron in a first layer may
communicate its output to every neuron in a second layer, so that each neuron in the second layer will receive input from every neuron in the first layer. FIGURE 2B illustrates an example of a locally connected neural network 204. In a locally connected neural network 204, a neuron in a first layer may be connected to a limited number of neurons in the second layer. More generally, a locally connected layer of the locally connected neural network 204 may be configured so that each neuron in a layer will have the same or a similar connectivity pattern, but with connection strengths that may have different values (e.g., 210, 212, 214, and 216). The locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher layer neurons in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network. [0038] One example of a locally connected neural network is a convolutional neural network. FIGURE 2C illustrates an example of a convolutional neural network 206. The convolutional neural network 206 may be configured such that the connection strengths associated with the inputs for each neuron in the second layer are shared (e.g., 208). Convolutional neural networks may be well suited to problems in which the spatial location of inputs is meaningful. [0039] One type of convolutional neural network is a deep convolutional network (DCN). FIGURE 2D illustrates a detailed example of a DCN 200 designed to recognize visual features from an image 226 input from an image capturing device 230, such as a car-mounted camera. The DCN 200 of the current example may be trained to identify traffic signs and a number provided on the traffic sign. Of course, the DCN 200 may be trained for other tasks, such as identifying lane markings or identifying traffic lights. [0040] The DCN 200 may be trained with supervised learning. During training, the DCN 200 may be presented with an image, such as the image 226 of a speed limit sign, and a forward pass may then be computed to produce an output 222. The DCN 200 may include a feature extraction section and a classification section. Upon receiving the image 226, a convolutional layer 232 may apply convolutional kernels (not shown) to the image 226 to generate a first set of feature maps 218. As an example, the convolutional kernel for the convolutional layer 232 may be a 5x5 kernel that generates
28x28 feature maps. In the present example, because four different feature maps are generated in the first set of feature maps 218, four different convolutional kernels were applied to the image 226 at the convolutional layer 232. The convolutional kernels may also be referred to as filters or convolutional filters. [0041] The first set of feature maps 218 may be subsampled by a max pooling layer (not shown) to generate a second set of feature maps 220. The max pooling layer reduces the size of the first set of feature maps 218. That is, a size of the second set of feature maps 220, such as 14x14, is less than the size of the first set of feature maps 218, such as 28x28. The reduced size provides similar information to a subsequent layer while reducing memory consumption. The second set of feature maps 220 may be further convolved via one or more subsequent convolutional layers (not shown) to generate one or more subsequent sets of feature maps (not shown). [0042] In the example of FIGURE 2D, the second set of feature maps 220 is convolved to generate a first feature vector 224. Furthermore, the first feature vector 224 is further convolved to generate a second feature vector 228. Each feature of the second feature vector 228 may include a number that corresponds to a possible feature of the image 226, such as “sign,” “60,” and “100.” A softmax function (not shown) may convert the numbers in the second feature vector 228 to a probability. As such, an output 222 of the DCN 200 is a probability of the image 226 including one or more features. [0043] In the present example, the probabilities in the output 222 for “sign” and “60” are higher than the probabilities of the others of the output 222, such as “30,” “40,” “50,” “70,” “80,” “90,” and “100”. Before training, the output 222 produced by the DCN 200 is likely to be incorrect. Thus, an error may be calculated between the output 222 and a target output. The target output is the ground truth of the image 226 (e.g., “sign” and “60”). The weights of the DCN 200 may then be adjusted so the output 222 of the DCN 200 is more closely aligned with the target output. [0044] To adjust the weights, a learning algorithm may compute a gradient vector for the weights. The gradient may indicate an amount that an error would increase or decrease if the weight were adjusted. At the top layer, the gradient may correspond
directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer. In lower layers, the gradient may depend on the value of the weights and on the computed error gradients of the higher layers. The weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as “back propagation” as it involves a “backward pass” through the neural network. [0045] In practice, the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient. This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level. After learning, the DCN may be presented with new images (e.g., the speed limit sign of the image 226) and a forward pass through the network may yield an output 222 that may be considered an inference or a prediction of the DCN. [0046] Deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes. DBNs may be used to extract a hierarchical representation of training data sets. A DBN may be obtained by stacking up layers of Restricted Boltzmann Machines (RBMs). An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs. Because RBMs can learn a probability distribution in the absence of information about the class to which each input should be categorized, RBMs are often used in unsupervised learning. Using a hybrid unsupervised and supervised paradigm, the bottom RBMs of a DBN may be trained in an unsupervised manner and may serve as feature extractors, and the top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and may serve as a classifier. [0047] Deep convolutional networks (DCNs) are networks of convolutional networks, configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning in which both the input and output targets are known for many
exemplars and are used to modify the weights of the network by use of gradient descent methods. [0048] DCNs may be feed-forward networks. In addition, as described above, the connections from a neuron in a first layer of a DCN to a group of neurons in the next higher layer are shared across the neurons in the first layer. The feed-forward and shared connections of DCNs may be exploited for fast processing. The computational burden of a DCN may be much less, for example, than that of a similarly sized neural network that comprises recurrent or feedback connections. [0049] The processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection. If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information. The outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map (e.g., 220) receiving input from a range of neurons in the previous layer (e.g., feature maps 218) and from each of the multiple channels. The values in the feature map may be further processed with a non-linearity, such as a rectification, max(0, x). Values from adjacent neurons may be further pooled, which corresponds to down sampling, and may provide additional local invariance and dimensionality reduction. Normalization, which corresponds to whitening, may also be applied through lateral inhibition between neurons in the feature map. [0050] The performance of deep learning architectures may increase as more labeled data points become available or as computational power increases. Modern deep neural networks are routinely trained with computing resources that are thousands of times greater than what was available to a typical researcher just fifteen years ago. New architectures and training paradigms may further boost the performance of deep learning. Rectified linear units may reduce a training issue known as vanishing gradients. New training techniques may reduce over-fitting and thus enable larger
models to achieve better generalization. Encapsulation techniques may abstract data in a given receptive field and further boost overall performance. [0051] FIGURE 3 is a block diagram illustrating a deep convolutional network 350. The deep convolutional network 350 may include multiple different types of layers based on connectivity and weight sharing. As shown in FIGURE 3, the deep convolutional network 350 includes the convolution blocks 354A, 354B. Each of the convolution blocks 354A, 354B may be configured with a convolution layer (CONV) 356, a normalization layer (LNorm) 358, and a max pooling layer (MAX POOL) 360. [0052] The convolution layers 356 may include one or more convolutional filters, which may be applied to the input data to generate a feature map. Although only two of the convolution blocks 354A, 354B are shown, the present disclosure is not so limited, and instead, any number of the convolution blocks 354A, 354B may be included in the deep convolutional network 350 according to design preference. The normalization layer 358 may normalize the output of the convolution filters. For example, the normalization layer 358 may provide whitening or lateral inhibition. The max pooling layer 360 may provide down sampling aggregation over space for local invariance and dimensionality reduction. [0053] The parallel filter banks, for example, of a deep convolutional network may be loaded on a CPU 102 or GPU 104 of an SOC 100 to achieve high performance and low power consumption. In alternative embodiments, the parallel filter banks may be loaded on the DSP 106 or an ISP 116 of an SOC 100. In addition, the deep convolutional network 350 may access other processing blocks that may be present on the SOC 100, such as sensor processor 114 and navigation module 120, dedicated, respectively, to sensors and navigation. [0054] The deep convolutional network 350 may also include one or more fully connected layers 362 (FC1 and FC2). The deep convolutional network 350 may further include a logistic regression (LR) layer 364. Between each layer 356, 358, 360, 362, 364 of the deep convolutional network 350 are weights (not shown) that are to be updated. The output of each of the layers (e.g., 356, 358, 360, 362, 364) may serve as an input of a succeeding one of the layers (e.g., 356, 358, 360, 362, 364) in the deep
convolutional network 350 to learn hierarchical feature representations from input data 352 (e.g., images, audio, video, sensor data and/or other input data) supplied at the first of the convolution blocks 354A. The output of the deep convolutional network 350 is a classification score 366 for the input data 352. The classification score 366 may be a set of probabilities, where each probability is the probability of the input data including a feature from a set of features. [0055] The memory storage of artificial neural networks (e.g., binary neural networks) can be significantly reduced when the weights and neuron activations are binarized to -1 or +1 ([-1,+1] space). However, digital complementary metal-oxide-semiconductor (CMOS) logic works in the [0,1] space. Thus, during binary implementations, a transformation occurs between digital CMOS devices, which use a [0,1] basis, and binary neural networks, which use a [-1,+1] basis. [0056] A memory cell may be configured to support an exclusive-negative OR (XNOR) function. For example, TABLES 1-3 (e.g., truth tables) illustrate mappings of the binary neural network in the [0,1] space to binary multiplication in the binarized [-1,+1] space. A two-input logical function is illustrated in the truth tables 1-3. [0057] TABLE 1 illustrates an example of the binary multiplication in the binarized [-1,+1] space. For example, multiplication in the binarized [-1,+1] space produces a 4-bit output of -1*-1, -1*+1, +1*-1, +1*+1 (e.g., "1,-1,-1,1").

TABLE 1
  Input 1   Input 2   Output
    -1        -1        +1
    -1        +1        -1
    +1        -1        -1
    +1        +1        +1

[0058] TABLE 2 illustrates an example of an XNOR implementation. The memory cell may be configured to perform an XNOR function on a first input value (e.g., binary neuron activation) and a second input value (e.g., a binary synaptic weight) to generate a
binary output. For example, the XNOR function is only true when all of the input values are true or when all of the input values are false. If some of the inputs are true and others are false, then the output of the XNOR function is false. Thus, when both inputs (e.g., first and second inputs) are false (e.g., the first input is 0 and the second input is 0), as shown in TABLE 2, the output is true (e.g., 1). When the first input is false (0) and the second input is true (1), the output is false (0). When the first input is true (1) and the second input is false (0), the output is false (0). When the first input is true (1) and the second input is true (1), the output is true (1). Thus, the truth table for the XNOR function with two inputs has a binary output of "1,0,0,1." [0059] Thus, the binary multiplication in the binarized [-1,+1] space maps to the binary output of the XNOR in the [0,1] space. For example, the "1s" in the 4-bit output of the binarized [-1,+1] space map to the "1s" in the binary output of the XNOR function, and the "-1s" in the 4-bit output of the binarized [-1,+1] space map to the "0s" in the binary output of the XNOR function.

TABLE 2
  Input 1   Input 2   XNOR Output
     0         0          1
     0         1          0
     1         0          0
     1         1          1

TABLE 3
  Input 1   Input 2   Output
     0         0          0
     0         1          0
     1         0          0
     1         1          1
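These mappings can be checked mechanically. The following Python sketch (illustrative only) verifies that TABLE 1 agrees with TABLE 2, and anticipates the observation below that TABLE 3 does not:

    # Illustrative check of TABLES 1-3: multiplication in the [-1,+1] space
    # matches XNOR in the [0,1] space, while plain [0,1] multiplication does not.
    def xnor(a, b):
        return 1 - (a ^ b)                  # XNOR on {0,1} inputs

    for a, b in ((0, 0), (0, 1), (1, 0), (1, 1)):
        A, B = 2 * a - 1, 2 * b - 1         # lift {0,1} to {-1,+1}
        assert (A * B + 1) // 2 == xnor(a, b)   # TABLE 1 agrees with TABLE 2
    # TABLE 3 disagrees with TABLE 2 on the (0, 0) row: 0*0 = 0, but XNOR(0,0) = 1.
    assert 0 * 0 != xnor(0, 0)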
[0060] In contrast, the binary multiplication in the binarized [-1,+1] space does not map to the binary multiplication in a binarized [0,1] space, as shown in TABLE 3. For example, TABLE 3 illustrates multiplication in the binarized [0,1] space to produce a 4-bit output of "0,0,0,1," which does not map to the 4-bit output of "1,-1,-1,1" in the binarized [-1,+1] space. For example, the 4-bit output of "0,0,0,1" includes only one true bit (e.g., the last bit), while the 4-bit output of "1,-1,-1,1" includes two true bits (e.g., the first bit and the last bit). [0061] Conventionally, binary neural networks implemented with compute-in-memory systems are realized by computing XNOR at each bit cell and summing the results along each bit line to generate output values. However, adding an XNOR function in each bit cell is expensive. For example, the number of transistors for each cell in the memory (e.g., SRAM) increases from six or eight to twelve, which significantly increases cell size and power consumption. [0062] Aspects of the present disclosure are directed to reducing size and improving power consumption of the memory by eliminating XNOR in a binary neural network compute-in-memory array. In one aspect, an activation threshold for each column of the compute-in-memory array is adjusted to avoid the use of the XNOR function and its corresponding transistor(s) in each bit cell. For example, a smaller memory (e.g., eight transistor SRAM) with smaller memory bit cells can be used for compute-in-memory binary neural networks. [0063] FIGURE 4 illustrates an exemplary architecture 400 for a compute-in-memory (CIM) array of an artificial neural network, according to aspects of the present disclosure. Compute-in-memory is a way of performing multiply and accumulate operations in a memory array. The memory array includes word lines 404 (or WL1), 405 (or WL2) … 406 (or WLM) as well as bit lines 401 (or BL1), 402 (or BL2) … 403 (or BLN). Weights (e.g., binary synaptic weight values) are stored in the bitcells of the memory array. The input activations (e.g., an input value that may be an input vector) are on the word lines. The multiplication happens at each bitcell, and the results of the multiplication are output through the bit lines. For example, the multiplication includes multiplying the weights with the input activations at each bitcell. A summing device
(not shown), such as a voltage/current summing device, associated with the bit lines or columns (e.g., 401) sums the output (e.g., charge, current or voltage) of the bit lines and passes the result (e.g., output) to an analog-to-digital converter (ADC). For example, a sum of each bit line is calculated from the respective outputs of the bitcells of each bit line. [0064] In one aspect of the disclosure, an activation threshold adjustment is made at each column (corresponding to the bit lines) instead of at each bit cell to improve area efficiency. [0065] Conventionally, starting with a binary neural network implementation where the weights and neuron activations are binarized to -1 or +1 ([-1,+1] space), the multiply and accumulate operations become an XNOR operation (in the bit cell) with an XNOR outcome and a population count of the XNOR outcomes. For example, the population count of a bit line includes the sum of the positive (e.g., "1") outcomes of each bit cell of the bit line. The ADC functions like a comparator by using the activation threshold. For example, the population count is compared to the activation threshold or criteria by the ADC. If the population count is greater than the activation threshold, then the output of the bit line is a "1." Otherwise, if the population count is less than or equal to the activation threshold, then the output of the bit line is a "0." However, it is desirable to eliminate the XNOR function and its corresponding transistor(s) in each bit cell to reduce the size of the memory. Specific techniques for adjusting the activation threshold are as follows: [0066] A convolutional layer of a neural network (e.g., a binary neural network) may include cells (e.g., bit cells) organized into an array (e.g., a compute-in-memory array). The cells include gated devices in which the electrical charge level present in the gated devices represents the stored weights of the array. A trained XNOR binary neural network having an array of M rows (e.g., the size of the input vector) and N columns includes MxN binary synaptic weights Wij, which are the weights in binary value [-1,+1], and the N activation thresholds Cj. The inputs to the array may correspond to word lines and the outputs may correspond to bit lines. For example, the input activations Xi, which are the inputs in binary value [-1,+1], are X1, X2… XM. A sum of the products of the
inputs with corresponding weights is known as a weighted sum Yj = Σ_{i=1}^{M} Xi·Wij. [0067] For example, when the weighted sum is greater than the activation threshold Cj, then the output is equal to one (1). Otherwise, the output is equal to zero (0). The XNOR binary neural network can be mapped into a non-XNOR binary neural network in the [0,1] space while eliminating the XNOR function and its corresponding transistor(s) in each bit cell to reduce the size of the memory. In one aspect, the XNOR binary neural network can be mapped into a non-XNOR binary neural network with an adjustment in the activation threshold Cj of each column, as follows: Equation 1 illustrates a relationship between the sum of the products of the inputs with corresponding weights and the activation threshold Cj with respect to the XNOR binary neural network:

    Yj = Σ_{i=1}^{M} Xi·Wij > Cj    (1)

where
- Xi (e.g., X1, X2… XM) is the input in binary value [-1,+1];
- Wij is the weight in binary value [-1,+1] (e.g., an MxN matrix);
- Yj (e.g., Y1, Y2… YN) is the output of the bit lines in binary value [-1,+1].

[0068] Conventionally, compute-in-memory binary neural network implementations are achieved by completing XNOR operations at each bit cell and summing the result for each bit line. Adding an XNOR function in each bit cell, however, increases layout area and increases power consumption. For example, the number of transistors in each cell in the memory (e.g., SRAM) increases from six or eight to twelve. Accordingly, aspects of the present disclosure are directed to transforming multiply and accumulate operations for a compute-in-memory array of an artificial neural network (e.g., a binary neural network) from the [-1,+1] space to the [0,1] space using activation threshold adjustments.
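For concreteness, the conventional per-bit-line XNOR population count and comparator behavior described above can be sketched as follows (a minimal illustration; the array sizes and the threshold values are assumptions for this example only):

    import numpy as np

    # Sketch of the conventional approach: XNOR at every bitcell, population
    # count per bit line, then a comparator (the ADC) against a threshold.
    rng = np.random.default_rng(1)
    M, N = 8, 4
    x = rng.integers(0, 2, size=M)           # activations in {0,1}
    w = rng.integers(0, 2, size=(M, N))      # weights in {0,1}
    C = np.full(N, M // 2)                   # assumed per-column thresholds

    xnor = 1 - (x[:, None] ^ w)              # XNOR outcome at each bitcell
    popcount = xnor.sum(axis=0)              # population count per bit line
    output = (popcount > C).astype(int)      # "1" if count exceeds the threshold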
[0069] The binary neural network is converted from the [-1,+1] space (where Yj = Σ_{i=1}^{M} Xi·Wij) to the [0,1] space, and the output of the bit lines in the [0,1] space is compared to a different threshold (the derivation of which is discussed below), as follows:

    Σ_{i=1}^{M} xi·wij > cj + b    (2)

where
- xi is the input in binary value [0,1];
- wij is the weight in binary value [0,1];
- cj corresponds to the adjustable activation (or adjusted activation threshold) in the [0,1] space; and
- b represents a conversion bias (e.g., a conversion bias current reference) in the [0,1] space.

[0070] Adjusting the activation threshold as described herein allows for avoiding/forgoing implementing XNOR functionality in each bit cell while obtaining the same outcome as if the XNOR functionality were implemented. Adjusting the activation threshold enables the use of a simpler and smaller memory bit cell (e.g., an eight transistor (8T) SRAM). [0071] The following equations (3 and 4) include variables for mapping a network in the [-1,+1] space to the [0,1] space:

    Xi = 2xi - 1    (3)

    Wij = 2wij - 1    (4)

[0072] Inserting the values of equations three (3) and four (4) into equation 1, an adjusted activation threshold can be determined through conversion or transformation between the [-1,+1] space and the [0,1] space. The following is a derivation of the adjusted activation threshold:

    Yj = Σ_{i=1}^{M} (2xi - 1)(2wij - 1)    (5)

Expanding equation 5:

    Yj = 4·Σ_{i=1}^{M} xi·wij - 2·Σ_{i=1}^{M} xi - 2·Σ_{i=1}^{M} wij + M    (6)

Comparing the output of the bit lines Yj in binary value [-1,+1] to the activation threshold Cj in the [-1,+1] space (as in equation 1), Yj > Cj. Inserting the value of the output of the bit lines Yj in equation 6 into equation 1 obtains a population count per bit line in the [0,1] space:

    4·Σ_{i=1}^{M} xi·wij - 2·Σ_{i=1}^{M} xi - 2·Σ_{i=1}^{M} wij + M > Cj    (7)

    4·Σ_{i=1}^{M} xi·wij > Cj - M + 2·Σ_{i=1}^{M} xi + 2·Σ_{i=1}^{M} wij    (8)

    Σ_{i=1}^{M} xi·wij > (Cj - M)/4 + (1/2)·Σ_{i=1}^{M} wij + (1/2)·Σ_{i=1}^{M} xi    (9)

[0073] Mapping the population count per bit line in the [0,1] space (Σ_{i=1}^{M} xi·wij in equation 9) to the population count per bit line in the [-1,+1] space, the N activation thresholds Cj in the [-1,+1] space map to the adjustable activations cj as well as the conversion bias b in the [0,1] space. For example, to achieve the non-XNOR binary neural network in the [0,1] space while eliminating the XNOR function and its corresponding transistor(s), the adjustable activations cj as well as the conversion bias b in the [0,1] space are used. [0074] Referring to equation 9, the function (Cj - M)/4 + (1/2)·Σ_{i=1}^{M} wij is used to determine the adjustable activations cj. For example, the elements wij correspond to the weight values entering each adjustable activation. The adjustable activations do not depend on the activation inputs xi. The adjustable activations are also made up of predetermined parameters (e.g., binary synaptic weights Wij) that can be changed to adjust the activation thresholds Cj.
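The equivalence expressed by equations 1 through 9 can be verified numerically. The sketch below (variable names assumed for illustration; c_adj and bias correspond to cj and b above) confirms that the [0,1]-space comparison with the adjusted threshold reproduces the [-1,+1]-space comparison:

    import numpy as np

    # Numerical check of equation 9: the XNOR-free comparison in the [0,1]
    # space gives the same bit-line outputs as equation 1 in the [-1,+1] space.
    rng = np.random.default_rng(2)
    M, N = 8, 4
    X = rng.choice([-1, 1], size=M)              # inputs in [-1,+1]
    W = rng.choice([-1, 1], size=(M, N))         # weights in [-1,+1]
    C = rng.integers(-M, M, size=N)              # thresholds Cj in [-1,+1] space

    x, w = (X + 1) // 2, (W + 1) // 2            # equations 3 and 4

    lhs = (X @ W) > C                            # equation 1
    c_adj = (C - M) / 4 + w.sum(axis=0) / 2      # adjusted activation threshold cj
    bias = x.sum() / 2                           # conversion bias b
    rhs = (x @ w) > (c_adj + bias)               # equation 9
    assert np.array_equal(lhs, rhs)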
[0075] The function (1/2)·Σ_{i=1}^{M} xi is used to determine the conversion bias b. The conversion bias only depends on the input activations. This means that the conversion bias changes constantly as new inputs are received. [0076] For example, for a given activation threshold Cj, the values of cj and b are calculated with respect to equation 9 as cj = (Cj - M)/4 + (1/2)·Σ_{i=1}^{M} wij and b = (1/2)·Σ_{i=1}^{M} xi. [0077] Similarly, when Cj takes a different value, only the (Cj - M)/4 term of cj changes with respect to equation 9; the conversion bias b is unaffected. [0078] In some aspects, the function (1/2)·Σ_{i=1}^{M} xi can be used to generate the conversion bias from a reference column (e.g., a bit line that is not part of the N bit lines, which is used as a reference) by setting the weight in binary value equal to 1. Inserting the value of wij equal to 1 into equation 9, the population count of the reference column becomes:

    Σ_{i=1}^{M} xi·1 = Σ_{i=1}^{M} xi

[0079] The population count is equal to Σ_{i=1}^{M} xi, which can be used to offset the activation current. Thus, only a single column is specified to determine a reference bias value or conversion bias for the whole memory array.
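A reference column of this kind is straightforward to emulate. In the sketch below (names assumed for illustration), programming every weight of one extra column to 1 yields a population count equal to the sum of the [0,1] inputs, half of which supplies the conversion bias:

    import numpy as np

    # Reference-column sketch: with all of its weights programmed to 1, the
    # reference bit line's population count equals sum_i x_i, so half of that
    # count supplies the conversion bias for the whole array.
    rng = np.random.default_rng(3)
    M = 8
    x = rng.integers(0, 2, size=M)           # input activations in {0,1}
    w_ref = np.ones(M, dtype=int)            # reference column weights, all 1

    popcount_ref = int(x @ w_ref)            # population count of the reference column
    conversion_bias = popcount_ref / 2       # (1/2) * sum_i x_i, as in equation 9
    assert popcount_ref == x.sum()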
[0080] FIGURE 5 illustrates an exemplary architecture 500 for performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network, according to aspects of the present disclosure. The architecture includes a memory array including a reference column 502, a comparator 504, bit lines 506 (corresponding to columns of the memory array), and word lines 508 (corresponding to rows of the memory array). The input activations are received via the word lines. In one example, the binary values of the input activations xi are 10101011. [0081] The multiplication (e.g., xi·wij) happens at each bitcell, and the results of the multiplication from each bitcell are output through the bit lines 506. A summing device 510 associated with each of the bit lines 506 sums each output of the bitcells of the memory array and passes the result to the comparator 504. The comparator 504 may be part of the analog-to-digital converter (ADC) shown in FIGURE 4. In one aspect, the output of each bitcell associated with a first bit line is summed separately from the output of each bitcell associated with a second bit line. [0082] The activation threshold adjustment occurs at each column (corresponding to the bit lines) instead of at each bit cell, to improve area efficiency. For example, the sum of the output (or population count) of each bitcell associated with the first bit line is provided to the comparator 504 as a first input. A second input of the comparator includes a sum of the adjustable activation cj and the conversion bias b (e.g., conversion bias current reference). For example, the conversion bias b can be programmed into a criterion (e.g., the reference column 502) for each of the bit lines 506. When the population count is greater than the sum of the adjustable activation cj and the conversion bias b, then the output of the comparator 504, which corresponds to the output of the first bit line, is a "1." Otherwise, if the population count is less than or equal to the sum of the adjustable activation cj and the conversion bias b, then the output of the comparator, which corresponds to the output of the first bit line, is a "0." Thus, each bit line population count is compared to the sum of the adjustable activation cj and the conversion bias b. [0083] FIGURE 6 illustrates a method 600 for performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial
neural network, in accordance with aspects of the present disclosure. As shown in FIGURE 6, at block 602, an activation threshold generated for each column of the compute-in-memory array can be adjusted based on a function of a weight value and an activation value. At block 604, a conversion bias current reference is calculated based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weight values. Each of the adjusted activation threshold and the conversion bias current reference is used as a threshold for determining the output values of the compute-in-memory array. The compute-in-memory array has both columns and rows. [0084] According to a further aspect of the present disclosure, an apparatus for performing XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network is described. The apparatus includes means for adjusting an activation threshold for each column of the array based on a function of a weight value and an activation value. The adjusting means includes the deep convolutional network 200, the deep convolutional network 350, the convolutional layer 232, the SOC 100, the CPU 102, the architecture 500, the architecture 400, and/or the convolutional block 354A. The apparatus further includes means for calculating a conversion bias current reference based on an input value from an input vector. The calculating means includes the deep convolutional network 200, the deep convolutional network 350, the convolutional layer 232, the SOC 100, the CPU 102, the architecture 500, the architecture 400, and/or the convolutional block 354A. [0085] The apparatus further includes means for comparing a bit line population count to a sum of the conversion bias current reference and the adjusted activation threshold in order to determine an output of a bit line. The comparing means includes the comparator 504 of FIGURE 5 and/or the analog-to-digital converter (ADC) of FIGURE 4. In another aspect, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means. [0086] The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including,
but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in the figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. [0087] As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Additionally, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Furthermore, "determining" may include resolving, selecting, choosing, establishing, and the like. [0088] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c. [0089] The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. [0090] The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may
be used include random access memory (RAM), read only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. [0091] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. [0092] The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement signal processing functions. For certain aspects, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. [0093] The processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media.
The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Machine-readable media may include, by way of example, random access memory (RAM), flash memory, read only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. The computer-program product may comprise packaging materials. [0094] In a hardware implementation, the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as may be the case with cache and/or general register files. Although the various components discussed may be described as having a specific location, such as a local component, they may also be configured in various ways, such as certain components being configured as part of a distributed computing system. [0095] The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described herein. As another alternative, the processing system may be implemented with an
application specific integrated circuit (ASIC) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system. [0096] The machine-readable media may comprise a number of software modules. The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Furthermore, it should be appreciated that aspects of the present disclosure result in improvements to the functioning of the processor, computer, machine, or other system implementing such aspects. [0097] If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer-readable medium. Computer- readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to
carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Additionally, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. [0098] Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material. [0099] Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
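By way of illustration only, the column-threshold adjustment and bit line comparison described above with reference to FIGURE 6 may be sketched in software. The following minimal Python model assumes a common bipolar encoding in which weights and activations in {-1, +1} are stored and applied as bits b = (w + 1)/2 and a = (x + 1)/2; under that assumption a bipolar dot product equals 4P - 2A - 2B + n, where P is the bit line population count, A is the input popcount, B is the per-column weight popcount, and n is the number of rows, so comparing P against (theta + 2B - n)/4 + A/2 reproduces the XNOR-based result using AND-type bit cells. The function names and the use of NumPy are assumptions of this sketch, not part of the disclosure.

    import numpy as np

    def adjusted_column_thresholds(weight_bits, theta):
        # Fold each column's stored weight bits into its activation threshold.
        # weight_bits: (n_rows, n_cols) 0/1 array storing bipolar weights as b = (w + 1) / 2.
        # theta: bipolar activation threshold for each column, shape (n_cols,).
        n_rows = weight_bits.shape[0]
        col_popcount = weight_bits.sum(axis=0)  # count of +1 weights per column
        return (theta + 2.0 * col_popcount - n_rows) / 4.0

    def conversion_bias_reference(input_bits):
        # Input-dependent correction shared by all columns; input_bits is a 0/1
        # vector encoding bipolar activations as a = (x + 1) / 2.
        return input_bits.sum() / 2.0

    def cim_xnor_equivalent(input_bits, weight_bits, theta):
        # AND-based popcount on each bit line, compared against the sum of the
        # adjusted column threshold and the conversion bias reference.
        popcount = input_bits @ weight_bits  # per-column bit line population count
        threshold = adjusted_column_thresholds(weight_bits, theta)
        threshold = threshold + conversion_bias_reference(input_bits)
        return (popcount >= threshold).astype(np.int8)  # 1 encodes +1, 0 encodes -1

In this sketch the weight-dependent term corresponds to the adjusted activation threshold of block 602 and the input-dependent term to the conversion bias current reference of block 604, matching the comparison recited for the comparing means above.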
[00100] It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims. |
Methods and devices enable users of still or video image content to indicate objects within images in order to obtain more information regarding products of interest. Selected portions of an image or coordinates within the image may be included in a product query message transmitted to a server. The server receiving image information may process the information to recognize objects or particular products within the image selection. Recognized objects or products may be compared to a database of available merchandise to determine availability. Information regarding commercially available products may be included in a product information message transmitted to the user's computing device. Users may initiate a purchase transaction for recommended products based on the product information. The image may be broadcast by a variety of content delivery services including a mobile broadcast TV network. |
CLAIMS What is claimed is: 1. A method for facilitating merchandise transactions, comprising: receiving a product query message from a computing device, the product query message including image selection information regarding a selected portion of an image; processing the image selection information to identify a product object within the selected portion of the image; comparing the product object to a merchandise database to determine whether the object matches or corresponds to an available product; generating a product information message based upon whether the object matches or corresponds to an available product; and transmitting the product information message to the computing device. 2. The method of claim 1, wherein when it is determined that the object matches or corresponds to an available product, generating the product information message comprises: obtaining information regarding the available product; and generating the product information message including the obtained information regarding the available product. 3. The method of claim 1, wherein when it is determined that the object does not match or correspond to an available product, generating the product information message comprises: identifying an alternative available product; obtaining information regarding the identified alternative available product; and generating the product information message including the obtained information regarding the alternative available product. 4. The method of claim 2, further comprising: identifying an additional available product that may be of interest to a user; obtaining information regarding the additional available product; and generating the product information message including the obtained information regarding the additional available product. 5. The method of claim 1, wherein the image selection information concerns an image provided by a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 6. The method of claim 1, wherein: the image selection information concerns an image provided by a mobile television (TV) broadcast; the computing device is a mobile device configured to receive mobile TV broadcast transmissions; the product query message is received via a unicast network; and the product information message is transmitted via the unicast network. 7. The method of claim 1, wherein: the image selection information comprises image data; and processing the image selection information comprises processing the image data to identify an image object within the selected portion of the image. 8. The method of claim 7, further comprising: comparing the identified image object to known products to identify the product object. 9. The method of claim 7, further comprising: comparing the identified image object to images in the merchandise database to identify the product object. 10. The method of claim 7, further comprising: comparing the identified image object to images of products known to have been placed in the broadcast video. 11. 
The method of claim 1, wherein: the image selection information comprises a video frame identifier and a location within the frame, and processing the image selection information to identify a product object within the selected portion of the image comprises: using the video frame identifier and the location within the frame to obtain the selected portion of the image from a database; and processing the obtained selected portion of the image to identify an image object within the selected portion of the image. 12. The method of claim 1, wherein: the image selection information comprises image data; and processing the image selection information comprises: forwarding the image data to a service that recognizes images; and receiving an object description from the service. 13. The method of claim 1, further comprising: selecting a most likely product of interest from a plurality of available products matching or corresponding to a plurality of product objects based upon annotation information included in the product query message. 14. The method of claim 1, further comprising: receiving a transaction request message from the computing device in response to the product information message; and facilitating a transaction in response to the transaction request message. 15. The method of claim 14, wherein facilitating a transaction comprises transmitting an electronic coupon to the computing device. 16. The method of claim 14, wherein facilitating a transaction comprises transmitting information to the computing device regarding a location of a source for a product indicated in the transaction request message. 17. The method of claim 1, wherein the product information message includes information regarding a location of a source for the available products. 18. A server, comprising: a processor; memory coupled to the processor; and a network access port coupled to the processor and configured to communicate with a network, wherein the processor is configured with processor-executable instructions to perform steps comprising: receiving a product query message from a computing device via the network, the product query message including image selection information regarding a selected portion of an image; processing the image selection information to identify a product object within the selected portion of the image; comparing the product object to a merchandise database to determine whether the object matches or corresponds to an available product; generating a product information message based upon whether the object matches or corresponds to an available product; and transmitting the product information message to the computing device via the network. 19. The server of claim 18, wherein the processor is configured with processor-executable instructions such that when it is determined that the object matches or corresponds to an available product, the processor-performed step of generating the product information message comprises: obtaining information regarding the available product; and generating the product information message including the obtained information regarding the available product. 20. 
The server of claim 18, wherein the processor is configured with processor-executable instructions such that when it is determined that the object does not match or correspond to an available product, the processor-performed step of generating the product information message comprises: identifying an alternative available product; obtaining information regarding the identified alternative available product; and generating the product information message including the obtained information regarding the alternative available product. 21. The server of claim 19, wherein the processor is configured with processor-executable instructions to perform steps further comprising: identifying an additional available product that may be of interest to a user; obtaining information regarding the additional available product; and generating the product information message including the obtained information regarding the additional available product. 22. The server of claim 18, wherein the processor is configured with processor-executable instructions such that the image selection information concerns an image provided by a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 23. The server of claim 18, wherein the processor is configured with processor-executable instructions such that: the image selection information concerns an image provided by a mobile television (TV) broadcast; the computing device is a mobile device configured to receive mobile TV broadcast transmissions; the product query message is received via a unicast network; and the product information message is transmitted via the unicast network. 24. The server of claim 18, wherein the processor is configured with processor-executable instructions such that: the image selection information comprises image data; and processing the image selection information comprises processing the image data to identify an image object within the selected portion of the image. 25. The server of claim 24, wherein the processor is configured with processor-executable instructions to perform steps further comprising: comparing the identified image object to known products to identify the product object. 26. The server of claim 24, wherein the processor is configured with processor-executable instructions to perform steps further comprising: comparing the identified image object to images in the merchandise database to identify the product object. 27. The server of claim 24, wherein the processor is configured with processor-executable instructions to perform steps further comprising: comparing the identified image object to images of products known to have been placed in the broadcast video. 28. The server of claim 18, wherein the processor is configured with processor-executable instructions such that: the image selection information comprises a video frame identifier and a location within the frame, and processing the image selection information to identify a product object within the selected portion of the image comprises: using the video frame identifier and the location within the frame to obtain the selected portion of the image from a database; and processing the obtained selected portion of the image to identify an image object within the selected portion of the image. 29. 
The server of claim 18, wherein the processor is configured with processor-executable instructions such that: the image selection information comprises image data; and processing the image selection information comprises: forwarding the image data to a service that recognizes images; and receiving an object description from the service. 30. The server of claim 18, wherein the processor is configured with processor-executable instructions to perform steps further comprising: selecting a most likely product of interest from a plurality of available products matching or corresponding to a plurality of product objects based upon annotation information included in the product query message. 31. The server of claim 18, wherein the processor is configured with processor-executable instructions to perform steps further comprising: receiving a transaction request message from the computing device in response to the product information message; and facilitating a transaction in response to the transaction request message. 32. The server of claim 31, wherein the processor is configured with processor-executable instructions such that facilitating a transaction comprises transmitting an electronic coupon to the computing device. 33. The server of claim 31, wherein the processor is configured with processor-executable instructions such that facilitating a transaction comprises transmitting information to the computing device regarding a location of a source for a product indicated in the transaction request message. 34. The server of claim 18, wherein the processor is configured with processor-executable instructions such that the product information message includes information regarding a location of a source for the available products. 35. A server, comprising: means for receiving a product query message from a computing device, the product query message including image selection information regarding a selected portion of an image; means for processing the image selection information to identify a product object within the selected portion of the image; means for comparing the product object to a merchandise database to determine whether the object matches or corresponds to an available product; means for generating a product information message based upon whether the object matches or corresponds to an available product; and means for transmitting the product information message to the computing device. 36. The server of claim 35, wherein when it is determined that the object matches or corresponds to an available product, the means for generating the product information message comprises: means for obtaining information regarding the available product; and means for generating the product information message including the obtained information regarding the available product. 37. The server of claim 35, wherein when it is determined that the object does not match or correspond to an available product, the means for generating the product information message comprises: means for identifying an alternative available product; means for obtaining information regarding the identified alternative available product; and means for generating the product information message including the obtained information regarding the alternative available product. 38. 
The server of claim 36, further comprising: means for identifying an additional available product that may be of interest to a user; means for obtaining information regarding the additional available product; and means for generating the product information message including the obtained information regarding the additional available product. 39. The server of claim 35, wherein the image selection information concerns an image provided by a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 40. The server of claim 35, wherein: the image selection information concerns an image provided by a mobile television (TV) broadcast; the computing device is a mobile device configured to receive mobile TV broadcast transmissions; the product query message is received via a unicast network; and the product information message is transmitted via the unicast network. 41. The server of claim 35, wherein: the image selection information comprises image data; and the means for processing the image selection information comprises means for processing the image data to identify an image object within the selected portion of the image. 42. The server of claim 41, further comprising: means for comparing the identified image object to known products to identify the product object. 43. The server of claim 41, further comprising: means for comparing the identified image object to images in the merchandise database to identify the product object. 44. The server of claim 41, further comprising: means for comparing the identified image object to images of products known to have been placed in the broadcast video. 45. The server of claim 35, wherein: the image selection information comprises a video frame identifier and a location within the frame, and the means for processing the image selection information to identify a product object within the selected portion of the image comprises: means for using the video frame identifier and the location within the frame to obtain the selected portion of the image from a database; and means for processing the obtained selected portion of the image to identify an image object within the selected portion of the image. 46. The server of claim 35, wherein: the image selection information comprises image data; and the means for processing the image selection information comprises: means for forwarding the image data to a service that recognizes images; and means for receiving an object description from the service. 47. The server of claim 35, further comprising: means for selecting a most likely product of interest from a plurality of available products matching or corresponding to a plurality of product objects based upon annotation information included in the product query message. 48. The server of claim 35, further comprising: means for receiving a transaction request message from the computing device in response to the product information message; and means for facilitating a transaction in response to the transaction request message. 49. The server of claim 48, wherein the means for facilitating a transaction comprises means for transmitting an electronic coupon to the computing device. 50. The server of claim 48, wherein the means for facilitating a transaction comprises means for transmitting information to the computing device regarding a location of a source for a product indicated in the transaction request message. 51. 
The server of claim 35, wherein the product information message includes information regarding a location of a source for the available products. 52. A computer program product, comprising: a computer readable storage medium comprising: at least one instruction for receiving a product query message from a computing device, the product query message including image selection information regarding a selected portion of an image; at least one instruction for processing the image selection information to identify a product object within the selected portion of the image; at least one instruction for comparing the product object to a merchandise database to determine whether the object matches or corresponds to an available product; at least one instruction for generating a product information message based upon whether the object matches or corresponds to an available product; and at least one instruction for transmitting the product information message to the computing device. 53. The computer program product of claim 52, wherein the computer readable storage medium further comprises at least one instruction such that when it is determined that the object matches or corresponds to an available product, the at least one instruction for generating the product information message comprises: at least one instruction for obtaining information regarding the available product; and at least one instruction for generating the product information message including the obtained information regarding the available product. 54. The computer program product of claim 52, wherein the computer readable storage medium further comprises at least one instruction such that when it is determined that the object does not match or correspond to an available product, the at least one instruction for generating the product information message comprises: at least one instruction for identifying an alternative available product; at least one instruction for obtaining information regarding the identified alternative available product; and at least one instruction for generating the product information message including the obtained information regarding the alternative available product. 55. The computer program product of claim 53, wherein the computer readable storage medium further comprises: at least one instruction for identifying an additional available product that may be of interest to a user; at least one instruction for obtaining information regarding the additional available product; and at least one instruction for generating the product information message including the obtained information regarding the additional available product. 56. The computer program product of claim 52, wherein the computer readable storage medium further comprises at least one instruction such that the image selection information concerns an image provided by a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 57. The computer program product of claim 52, wherein the computer readable storage medium further comprises at least one instruction such that: the image selection information concerns an image provided by a mobile television (TV) broadcast; the computing device is a mobile device configured to receive mobile TV broadcast transmissions; the product query message is received via a unicast network; and the product information message is transmitted via the unicast network. 58. 
The computer program product of claim 52, wherein the computer readable storage medium further comprises at least one instruction such that: the image selection information comprises image data; and the at least one instruction for processing the image selection information comprises at least one instruction for processing the image data to identify an image object within the selected portion of the image. 59. The computer program product of claim 58, wherein the computer readable storage medium further comprises: at least one instruction for comparing the identified image object to known products to identify the product object. 60. The computer program product of claim 58, wherein the computer readable storage medium further comprises: at least one instruction for comparing the identified image object to images in the merchandise database to identify the product object. 61. The computer program product of claim 58, wherein the computer readable storage medium further comprises: at least one instruction for comparing the identified image object to images of products known to have been placed in the broadcast video. 62. The computer program product of claim 52, wherein the computer readable storage medium further comprises at least one instruction such that: the image selection information comprises a video frame identifier and a location within the frame; and the at least one instruction for processing the image selection information to identify a product object within the selected portion of the image comprises: at least one instruction for using the video frame identifier and the location within the frame to obtain the selected portion of the image from a database; and at least one instruction for processing the obtained selected portion of the image to identify an image object within the selected portion of the image. 63. The computer program product of claim 52, wherein the computer readable storage medium further comprises at least one instruction such that: the image selection information comprises image data; and the at least one instruction for processing the image selection information comprises: at least one instruction for forwarding the image data to a service that recognizes images; and at least one instruction for receiving an object description from the service. 64. The computer program product of claim 52, wherein the computer readable storage medium further comprises: at least one instruction for selecting a most likely product of interest from a plurality of available products matching or corresponding to a plurality of product objects based upon annotation information included in the product query message. 65. The computer program product of claim 52, wherein the computer readable storage medium further comprises: at least one instruction for receiving a transaction request message from the computing device in response to the product information message; and at least one instruction for facilitating a transaction in response to the transaction request message. 66. The computer program product of claim 65, wherein the at least one instruction for facilitating a transaction comprises at least one instruction for transmitting an electronic coupon to the computing device. 67. 
The computer program product of claim 65, wherein the at least one instruction for facilitating a transaction comprises at least one instruction for transmitting information to the computing device regarding a location of a source for a product indicated in the transaction request message. 68. The computer program product of claim 52, wherein the product information message includes information regarding a location of a source for the available products. 69. A method for inquiring about a product viewed in an image, comprising: displaying the image; receiving a user input designating a portion of the image; generating a product query message including image selection information regarding the designated portion of the image; transmitting the product query message to a transaction server; receiving a product information message; and displaying product information included in the product information message. 70. The method of claim 69, further comprising: receiving a video stream, wherein the image is an image within the video stream; displaying the received video stream; pausing the display of the video stream to display a still video image in response to a user input; and continuing the display of the video stream after receiving the user input designating a portion of the still video image. 71. The method of claim 70, further comprising including an identifier of the still video image in the product query message. 72. The method of claim 71, wherein the image selection information comprises coordinate information. 73. The method of claim 69, wherein the image selection information comprises image data. 74. The method of claim 69, further comprising: prompting the user to input a comment related to the designated portion of the image; receiving a user input; and including the user input as annotation information in the product query message. 75. The method of claim 69, wherein the image is received from a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 76. The method of claim 69, wherein: the image is received within a mobile TV broadcast; the product query message is transmitted via a unicast network; and the product information message is received via the unicast network. 77. The method of claim 69, further comprising: providing a transaction user interface in conjunction with displaying the product information; receiving a user input to conduct a transaction in response to displaying the product information; and transmitting a transaction request message in response to receiving the user input via the transaction user interface. 78. A computing device, comprising: a processor; a display coupled to the processor; and a transceiver coupled to the processor and configured to communicate with a network, wherein the processor is configured with processor-executable instructions to perform steps comprising: displaying an image on the display; receiving a user input designating a portion of the image; generating a product query message including image selection information regarding the designated portion of the image; transmitting the product query message to a transaction server via the transceiver; receiving a product information message via the transceiver; and displaying product information included in the product information message. 79. 
The computing device of claim 78, wherein the processor is configured with processor-executable instructions to perform further steps comprising: receiving a video stream via the transceiver, wherein the image is an image within the video stream; displaying the received video stream; pausing the display of the video stream to display a still video image in response to a user input; and continuing the display of the video stream after receiving the user input designating a portion of the still video image. 80. The computing device of claim 79, wherein the processor is configured with processor-executable instructions to perform further steps comprising including an identifier of the still video image in the product query message. 81. The computing device of claim 80, wherein the processor is configured with processor-executable instructions such that the image selection information comprises coordinate information. 82. The computing device of claim 78, wherein the processor is configured with processor-executable instructions such that the image selection information comprises image data. 83. The computing device of claim 78, wherein the processor is configured with processor-executable instructions to perform further steps comprising: prompting the user to input a comment related to the designated portion of the image; receiving a user input; and including the user input as annotation information in the product query message. 84. The computing device of claim 78, wherein the processor is configured with processor-executable instructions to perform further steps comprising receiving the image from a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 85. The computing device of claim 78, further comprising a mobile television broadcast receiver coupled to the processor, wherein the processor is configured with processor-executable instructions to perform further steps comprising: receiving the image within a mobile TV broadcast; transmitting the product query message via a unicast network accessed via the transceiver; and receiving the product information message from the unicast network via the transceiver. 86. The computing device of claim 78, wherein the processor is configured with processor-executable instructions to perform further steps comprising: providing a transaction user interface in conjunction with displaying the product information; receiving a user input to conduct a transaction in response to displaying the product information; and transmitting a transaction request message in response to receiving the user input via the transaction user interface. 87. A computing device, comprising: means for displaying an image; means for receiving a user input designating a portion of the image; means for generating a product query message including image selection information regarding the designated portion of the image; means for transmitting the product query message to a transaction server; means for receiving a product information message; and means for displaying product information included in the product information message. 88. 
The computing device of claim 87, further comprising: means for receiving a video stream, wherein the image is an image within the video stream; means for displaying the received video stream; means for pausing the display of the video stream to display a still video image in response to a user input; and means for continuing the display of the video stream after receiving the user input designating a portion of the still video image. 89. The computing device of claim 88, further comprising means for including an identifier of the still video image in the product query message. 90. The computing device of claim 89, wherein the image selection information comprises coordinate information. 91. The computing device of claim 87, wherein the image selection information comprises image data. 92. The computing device of claim 87, further comprising: means for prompting the user to input a comment related to the designated portion of the image; means for receiving a user input; and means for including the user input as annotation information in the product query message. 93. The computing device of claim 87, further comprising means for receiving the image from a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 94. The computing device of claim 87, further comprising: means for receiving the image within a mobile TV broadcast; means for transmitting the product query message via a unicast network; and means for receiving the product information message via the unicast network. 95. The computing device of claim 87, further comprising: means for providing a transaction user interface in conjunction with displaying the product information; means for receiving a user input to conduct a transaction in response to displaying the product information; and means for transmitting a transaction request message in response to receiving the user input via the transaction user interface. 96. A computer program product, comprising: a computer readable storage medium comprising: at least one instruction for displaying an image; at least one instruction for receiving a user input designating a portion of the image; at least one instruction for generating a product query message including image selection information regarding the designated portion of the image; at least one instruction for transmitting the product query message to a transaction server; at least one instruction for receiving a product information message; and at least one instruction for displaying product information included in the product information message. 97. The computer program product of claim 96, wherein the computer readable storage medium further comprises: at least one instruction for receiving a video stream, wherein the image is an image within the video stream; at least one instruction for displaying the received video stream; at least one instruction for pausing the display of the video stream to display a still video image in response to a user input; and at least one instruction for continuing the display of the video stream after receiving the user input designating a portion of the still video image. 98. The computer program product of claim 97, wherein the computer readable storage medium further comprises at least one instruction for including an identifier of the still video image in the product query message. 99. 
The computer program product of claim 98, wherein the image selection information comprises coordinate information. 100. The computer program product of claim 96, wherein the image selection information comprises image data. 101. The computer program product of claim 96, wherein the computer readable storage medium further comprises: at least one instruction for prompting the user to input a comment related to the designated portion of the image; at least one instruction for receiving a user input; and at least one instruction for including the user input as annotation information in the product query message. 102. The computer program product of claim 96, wherein the computer readable storage medium further comprises at least one instruction for receiving the image from a content delivery system selected from the group of a mobile broadcast television network, a cable television network, a satellite television network, an Internet, and a video storage medium. 103. The computer program product of claim 96, wherein the computer readable storage medium further comprises: at least one instruction for receiving the image within a mobile TV broadcast; at least one instruction for transmitting the product query message via a unicast network; and at least one instruction for receiving the product information message via the unicast network. 104. The computer program product of claim 96, wherein the computer readable storage medium further comprises: at least one instruction for providing a transaction user interface in conjunction with displaying the product information; at least one instruction for receiving a user input to conduct a transaction in response to displaying the product information; and at least one instruction for transmitting a transaction request message in response to receiving the user input via the transaction user interface. |
SYSTEMS AND METHODS FOR MERCHANDISING TRANSACTIONS VIA IMAGE MATCHING IN A CONTENT DELIVERY SYSTEM BACKGROUND [0001] Digital communication technologies have seen explosive growth over the past few years. This growth has been fueled by new content delivery technologies, including the Internet and new wireless services such as mobile broadcast television. With these new content delivery technologies have come increased consumer demand for audio and video content, as well as new opportunities for marketing products and responding to consumer demand. SUMMARY [0002] The various embodiments provide methods and systems that enable users of still images or video content to select products appearing within the images displayed on a computing device and request information regarding such products. The still images or video content may be received from a variety of content delivery systems including, for example, mobile broadcast television, cable television services, satellite television services, the Internet, stored video (e.g., DVD or Tivo®), and video shot by the user. Information regarding the selected parts of a still or video image, in the form of either the selected portion of the image itself or image coordinates sufficient to enable a server to obtain the selected portion of the image, may be included in a product query message that the user's computing device transmits to a processing server (referred to herein as a "Transaction Server"). The image information received in a product query message may be processed by the Transaction Server to recognize objects or particular products within the selected portion of the image. Recognized objects or products may be compared to a database of available merchandise to determine their commercial availability. Information regarding commercially available products may be included in a product information message that the Transaction Server generates and transmits to the user's computing device. Recommended products may be presented in a user interface display so that users can select a product for purchase. Users may then initiate a purchase transaction for one or more of the recommended products, such as by interacting with the user interface. Product purchase transactions may be accomplished according to any known transaction method. In the embodiments in which the still or video image content is broadcast, such as by a mobile TV broadcast network, the product query messages may be processed by a Transaction Server that is part of the broadcast provider. In embodiments in which the still or video image content is broadcast or unicast over an Internet protocol (IP) network, such as the Internet, the Transaction Server may be a Web server. In embodiments in which the video content is broadcast by a cable or satellite television network, the Transaction Server may be a Web server accessible via an IP network. BRIEF DESCRIPTION OF THE DRAWINGS [0003] The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments of the invention, and together with the general description given above and the detailed description given below, serve to explain the features of the invention. [0004] FIG. 1 is a communication system block diagram illustrating a communication system suitable for use with various embodiments. [0005] FIG. 2 is a communication system block diagram illustrating an Internet-based communication network suitable for use with various embodiments. [0006] FIG. 
3 is a system functionality block diagram of server functionality modules suitable for use with various embodiments. [0007] FIGs. 4A and 4B are process flow diagrams of an embodiment method for enabling mobile TV broadcast users to identify products for purchase within broadcast content. [0008] FIG. 5 is a message flow diagram of example messages that may be passed among various system components in the embodiment method illustrated in FIGs. 4A and 4B. [0009] FIG. 6 is a process flow diagram of an embodiment method for implementation within a computing device for enabling users to inquire about products of interest seen in broadcast content. [0010] FIG. 7 is a process flow diagram of an embodiment method for implementation within a computing device for enabling users to complete a purchase based on product information received in response to a product query. [0011] FIG. 8 is a process flow diagram of an embodiment method for processing a product query message received from a computing device and generating a product information message to the mobile device. [0012] FIG. 9 is a process flow diagram of another embodiment method for processing a product query message received from a computing device and generating a product information message to the mobile device. [0013] FIG. 10 is a process flow diagram of another embodiment method for processing a product query message received from a mobile device and generating a product information message to the mobile device. [0014] FIG. 11 is a process flow diagram of another embodiment method for processing a product query message received from a computing device and generating a product information message to the mobile device. [0015] FIG. 12 is a process flow diagram of an embodiment method for generating a coupon in response to a coupon request message received from a computing device. [0016] FIG. 13 is a process flow diagram of an embodiment method for completing a purchase transaction according to an embodiment. [0017] FIG. 14 is a process flow diagram of an embodiment method for reminding a user of a product of interest based upon geographic proximity to a source for such product. [0018] FIG. 15 is a component block diagram of a mobile device suitable for use with various embodiments. [0019] FIG. 16 is a component block diagram of a personal computer suitable for use with various embodiments. [0020] FIG. 17 is a component block diagram of a server suitable for use with various embodiments. DETAILED DESCRIPTION [0021] The various embodiments will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and implementations are for illustrative purposes, and are not intended to limit the scope of the invention or the claims. [0022] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations. [0023] As used herein, the term "computing device" refers generally to any device including a processor that may be programmed and configured to accomplish any of the various embodiments. 
As used herein, the terms "mobile device" and "handheld device" refer to any one or all of cellular telephones, personal data assistants (PDAs), palm-top computers, wireless electronic mail receivers (e.g., the Blackberry® and Treo® devices), multimedia Internet enabled cellular telephones (e.g., the Blackberry Storm®), Global Positioning System (GPS) receivers, wireless gaming controllers, and similar computing devices which include a programmable processor and memory and receiver circuitry for receiving and processing still and video image content. In an embodiment, a mobile device includes receiver circuitry for receiving and processing mobile broadcast television services. [0024] The term "unicast network" is used herein to refer to communication networks which transmit data to a single destination. Examples of a unicast network include WiFi and cellular data communication networks. Examples of unicast transmissions include short message service (SMS), multimedia message service (MMS), and electronic mail messages as may be carried via a cellular telephone data communication network. [0025] The word "broadcast" is used herein to mean the transmission of data (information packets) so that it can be received by a large number of receiving devices. Examples of a broadcast message are mobile television (TV) broadcast transmissions as well as digital television, IP multicast programs, cable television cablecasts and satellite television broadcasts. Although a broadcast content delivery system is a type of content delivery system, the embodiments are not limited to broadcast content delivery systems, as the embodiments may also be implemented with stored video images, such as stored in a digital video disc (DVD), a digital television storage device (e.g., Tivo®) or a web server accessed via the Internet (e.g., youtube.com). The embodiments may further be implemented with video images shot by the user, such as video shot on a mobile device like a cellular telephone. [0026] The various embodiments utilize a number of process modules, software programs or analysis engines that may be implemented within one or more server computers to accomplish the embodiment methods. While such process modules, software programs or analysis engines may be implemented in a variety of architectures, including as a single processing module, for ease of description herein reference is made to separate processing modules whose names and functions are defined in the following paragraphs. [0027] As used herein, "Transaction Server" refers to a server or network of servers which receive product query messages, process included or referenced images to identify products, and accomplish other product recommendation processes described herein. A Transaction Server may include a "Transaction Gateway," which may be a module that facilitates transaction communications with computing devices via a network, such as the Internet. [0028] As used herein, "Image Selection" refers to the act of an end user (e.g., a video consumer) selecting a portion of an image (e.g., within a frame of video content) that contains an object of interest, and "Image Selection Information" refers to the selected portion of the image (i.e., image data such as pixel values) or to information about the Image Selection that can be used to obtain the selected portion of the image from memory or an accessible database (e.g., a file identifier for the image or a frame number of the video and image coordinates within the frame). 
The Image Selection Information is transmitted to a Transaction Server in a Product Query message so the Transaction Server either receives the selected portion of the image or can obtain the Image Selection from memory or an accessible database. [0029] As used herein, "Annotation Information" refers to optional information that an end user could give to qualify the object of interest that is sent along with or in addition to the Image Selection Information in a Product Query message. Annotation Information may be in the form of text inputs, a voice memo, menu selections, etc. [0030] As used herein, "User Profile" refers to information about a particular end user, such as age, gender, salary range, purchase history, personal preferences, etc. User Profile information may be provided by a user and/or observed by a computing device or the Transaction Server based upon the user's activities (e.g., purchases, Product Queries, mobile device uses, etc.). The User Profile may also include information about the end user's computing device or mobile device, such as delivery or display capabilities, music download accounts, etc., that may be relevant to recommending products for purchase. [0031] As used herein, "Product Query" messages are messages that convey Image Selection Information along with Annotation Information to the Transaction Server. [0032] As used herein, "Product Correlation Engine" refers to a process module or software module implemented on a server, such as within the Transaction Server, which matches objects of interest to products from merchandisers' databases. As used herein, "Product Correlation" refers to the methods implemented by the Product Correlation Engine based upon information received in a Product Query. [0033] As used herein, "Recommendation Engine" refers to a process module or software module implemented on a server, such as within the Transaction Server, which recommends products to an end user based on a Product Query, a User Profile or other information. [0034] As used herein, "Product Information message" refers to a message produced by the processing of the Product Correlation Engine and Recommendation Engine that contains product pricing/coupon information for products that match the Product Query and User Profile. The Product Information message need not always be sent in response to a Product Query, and may be generated at a later time. For example, similar to advertisements, a Product Information message may be "pushed" to end users based on information in a User Profile, such as a previous Product Query or purchase transaction. For example, if a user requests a black leather jacket in a Product Query including Image Selection Information selected from the Terminator movie and in response is sent a Product Information message including jacket information on one day, another Product Information message may be sent the next day including information about sunglasses featured in the Terminator movie. [0035] Still images and video programs are now being delivered to consumers in a wide variety of formats. Consumers receive visual content from satellite and cable television networks, broadcast digital television networks, and the Internet. Noteworthy is the recent development of mobile TV broadcast services, which have begun delivering video content to mobile users. A number of different mobile TV broadcast services and broadcast standards are available or contemplated in the future, all of which may implement and benefit from the various embodiments. 
Such services and standards include Open Mobile Alliance Mobile Broadcast Services Enabler Suite (OMA BCAST), MediaFLO, Digital Video Broadcast IP Datacasting (DVB-IPDC), and China Multimedia Mobile Broadcasting (CMMB). [0036] Mobile TV users and mobile Internet users are different from the conventional home television audience in that they use viewing devices (i.e., mobile devices) that they carry with them, which receive mobile TV broadcast services or Internet multicast programs and can communicate via unicast messages. Additionally, mobile devices can be personalized to users since in the majority of instances only one person uses a mobile device. Still further, mobile devices now typically include video cameras that enable users to film video images, such as images of products of interest to them. Typically, users carry their mobile devices with them everywhere, including while shopping. Thus, the mobility and access to the Internet and mobile TV broadcasting services mean that mobile devices may be highly valuable marketing and electronic transaction tools. [0038] The various embodiments enable end users to select image portions or image data that correlate with merchandise displayed within broadcast programs, multicast programs, stored video images (e.g., from a DVD, Blu-ray disc player, or Tivo®) or a user video, and initiate order transactions for such products or related merchandise on their computing devices, such as personal computers or mobile devices. A client application running on the end user's computing device enables the user to identify products of interest by designating a portion of a displayed image containing a product or products of interest. The user may designate the selected portion of the still or video image using a pointing device, such as by drawing a circle around the image portion with a computer mouse or with a finger tracing a circle on a touchscreen display. The selected portion of the image including the object of interest may be packaged as Image Selection Information within a Product Query message that the mobile device transmits to a Transaction Server. Instead of transmitting the image data itself, the Image Selection Information within the Product Query may include information such as a frame number or image file name and coordinates within the image that define the selected portion so that the Transaction Server can obtain the image from an accessible database and determine the selected portion. The Product Query message forwarding the Image Selection Information may also include additional information ("Annotation Information"), such as voice notes or typed comments from the user regarding the product of interest. Such a Product Query message may be received in a Transaction Server where the message is parsed to obtain the Image Selection Information, Annotation Information, and other additional information, such as an identifier (ID) of the mobile device. The Image Selection Information may be processed by a Product Correlation Engine, which may be a software module within the Transaction Server or another server that is configured to process images in order to recognize particular product characteristics, such as shape, color and configuration. Image objects identified within the image selection may be compared to a database of merchandise to determine if the object(s) in the image selection matches or corresponds to available merchandise.
If a match is found, information regarding the make, model, source, cost, and other purchasing details for the product may be obtained from the merchandise database, and assembled into a Product Information message that is transmitted to the mobile device. The Product Information message may also include other related products beyond those that match, such as products which a Recommendation Engine predicts will be of interest to the end user based upon the matched product and information in the user's User Profile. If a direct match between the image selection and a product available in the marketplace is not found, the image selection may correspond to a product such as a later or replacement model of the imaged product, an equivalent competitor's product, or a replica of the imaged product. Therefore, if a product match or correspondence is not found, the Recommendation Engine may process the recognized image object(s) along with any received Annotation Information, User Profile information about the user and the mobile device, and other information to generate a recommendation of alternative merchandise that might be of interest to the user. As in the case of a matched product, recommended alternative merchandise information may be assembled into a Product Information message that is transmitted to the end user's computing device. [0039] When a computing device receives a Product Information message, a client application running on the device may parse the message to obtain information regarding the selected product and/or recommended alternative products, and generate a user interface display to enable the user to initiate a purchase transaction. This user interface display may be positioned outside the video or image content portion of the display to avoid interrupting the user's video content consumption. If a user decides to purchase or order any recommended product, the transaction may be promptly accomplished using a variety of electronic transaction methods. For example, when the user has transmitted a Product Query message from a mobile device, that device may place a data call to an Internet connection and access a merchant server to purchase the selected product online. As another example, the user's computing device may connect to a merchant server or Transaction Server to receive a coupon towards purchase of the product at a merchant's "brick and mortar" storefront. As a third example, information regarding the location of stores where the recommended product(s) can be purchased may be stored within a geographic information services (GIS) application so that when the user is in the vicinity of such a store the user's mobile device can alert the user and provide driving or walking directions. Regardless of the manner in which a transaction is completed, the user's computing device may report information to the Transaction Server or the merchant that links the transaction to the Product Information message and/or the Product Query message. Products within image selections provided in Product Query messages may be recognized using a variety of techniques, including for example image recognition algorithms, matching image information to product placement information supplied by content providers, and humans viewing the images and recognizing products manually. [0040] An example communication system including a mobile TV broadcast network 100 suitable for use with the various embodiments is illustrated in FIG. 1.
While the various embodiments are not limited to a mobile broadcast television content delivery system, the mobile broadcast TV communication system represents a preferred embodiment implementation and includes system components that are representative of components included in other types of content delivery systems. A mobile TV broadcast network 100 may receive content for broadcast from one or more content providers 101 via a network, such as the Internet, in a receiver decoder server 103. Received content may be processed by a transcoder 104 that places the received content in a format that can be broadcast to mobile devices. Trans-coded content may then be passed to a broadcaster 114 which places the broadcast content into a multiplex of broadcast transmission signals which are broadcast by broadcast sites 116. A scheduler server 108 within the mobile TV broadcast network 100 may coordinate the delivery of content to the broadcaster 114 as well as generate program schedule information that is broadcast to mobile devices in an overhead portion of broadcast transmissions. Communication among the various components within the mobile TV broadcast network 100 may be accomplished via a local area network 102. [0041] Within or coupled to the mobile TV broadcast network 100 may be a transaction module 110, which may be coupled to other components within the broadcast network 100 via the local area network 102. The transaction module 110 may include a Transaction Server 112 and a merchandise database 106. The Transaction Server 112 may be coupled via a network or the Internet to a wireless network provider 122 in order to receive unicast messages from mobile devices 118. The Transaction Server 112 may also be coupled to product manufacturers and merchants 124 via a network or the Internet in order to receive information regarding available merchandise, track transactions related to user Product Queries, complete transactions conducted through the Transaction Server 112, and inform merchants 124 of user interests in various products. Information regarding merchandise available for purchase, and other merchandise-related information necessary to support the functionality of the Transaction Server 112, may be stored within the merchandise database 106. Merchants and manufacturers 124 may also be able to store merchandise information in the merchandise database 106. [0042] While FIG. 1 shows the transaction module 110, including the Transaction Server 112 and merchandise database 106, as being within the mobile TV broadcast network 100, these components may be located outside the broadcaster's network, including being operated by third parties. Further, the merchandise database 106 may be located remotely, such as within a merchant server, and may comprise a number of databases, such as merchandise databases of a number of different merchants subscribing to a transaction service. Further, the Transaction Server 112 may be accessible via the Internet and configured to receive Product Queries from any type of computing device, and thus need not be limited to responding to Product Queries received from mobile devices 118. [0043] In operation, still and video image content is broadcast by the mobile TV broadcast network 100 via broadcast sites 116 and received by mobile devices 118. Users can use their mobile devices 118 to view selected broadcast programs.
When a user sees an object of interest within a broadcast program, the user may select the portion of a video image containing the object of interest, such as by tracing a circle around it with a fingertip on a touchscreen display. A client application operating in a processor within the mobile device 118 may use the indicated portion of the video image to generate a Product Query message. The Product Query message may be transmitted via a unicast network, such as a wireless data network 122, to the Transaction Server 112. In such a transmission, wireless data messages from the mobile device 118 may be received by a wireless node antenna 120 and forwarded by the wireless data network 122 to another network, such as the Internet, and on to the Transaction Server 112. The Product Query message from a mobile device 118 to the Transaction Server 112 may be relayed using well-known communication methods and systems, such as cellular data networks and wireless local area networks (e.g., WiFi), as well as wired network communications, such as the Internet. [0044] The various embodiments may also be used in connection with non-broadcast content, such as unicast and multicast still and video image content available via the Internet, as illustrated in FIG. 2 which shows communication system 200. Content providers 101 may distribute video content via a Content Delivery Network 202. The Content Delivery Network may be a unicast wireless, broadcast wireless, Internet, cable TV, satellite TV or terrestrial TV network. Further (though not shown separately), content providers 101 may distribute still and video image content via tangible storage products, such as DVD's and Blu-ray discs that users can purchase or rent and use in their computing devices 204, 206. Further, content delivered by conventional television signals (as well as cable and satellite) may be recorded and replayed using digital video recorders, such as Tivo® devices. Users may view such content on content consumption devices like mobile devices 118, personal computers (204, 206) or televisions (not shown). If users indicate a portion of a video image to be of interest, such as by using a pointing device like a computer mouse or touchscreen display, their computing device 118, 204, 206 may generate a Product Query message that may be transmitted to a transaction module 110 via the Reverse Link Data Network 208. The Reverse Link Data Network may be a wired network (e.g., DSL or cable) or wireless network (e.g., 3G or WiFi) that offers Internet capability. In certain implementations the Content Delivery Network and the Reverse Link Data Network may be the same network. [0045] FIG. 3 illustrates functional components that may be implemented within one or more servers functioning as the Transaction Server 112. As described above, a Transaction Server 112 may be coupled to a merchandise database 106 and to a wireless network 122 or Reverse Link Data Network 208. The Transaction Server 112 may include a network interface 302 including circuitry configured to effect communications with external networks 122, 208. A transaction gateway module 304 may be included within the software operating on the Transaction Server 112 to coordinate the various transaction functions, including those of the various embodiments described herein. Also included within the functionality of the Transaction Server 112 may be a Product Correlation Engine 306.
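A minimal structural sketch of the FIG. 3 functional components follows, assuming a single-server composition in Python. The class names mirror the modules in the figure, but the method stubs and constructor wiring are assumptions made for illustration only; as noted below, the same modules may equally be spread across multiple servers.

    class NetworkInterface:
        """Abstraction over the external network circuitry (networks 122, 208)."""
        def receive(self) -> bytes: ...
        def send(self, payload: bytes) -> None: ...

    class TransactionGateway:
        """Coordinates transaction communications (module 304)."""

    class ProductCorrelationEngine:
        """Matches image objects to merchandise (module 306)."""

    class RecommendationEngine:
        """Recommends products from a User Profile and query context (module 308)."""

    class TransactionEngine:
        """Supports electronic purchase transactions (module 310)."""

    class TransactionServer:
        """One possible composition of the FIG. 3 modules in a single server."""
        def __init__(self, merchandise_db):
            self.network = NetworkInterface()        # interface 302
            self.gateway = TransactionGateway()      # module 304
            self.correlator = ProductCorrelationEngine()
            self.recommender = RecommendationEngine()
            self.transactions = TransactionEngine()
            self.merchandise_db = merchandise_db     # database 106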
The Product Correlation Engine 306 may include software processes that can receive an image portion and infer information regarding products and merchandise included within such an image. The Transaction Server 112 may also include Recommendation Engine 308 functionality. Such a Recommendation Engine 308 may include software processes for identifying products based on a User Profile (e.g., age, gender, income, past product purchases, etc.). The Recommendation Engine 308 also identifies alternative merchandise which may appeal to a user based upon information in a Product Query message, as well as other relevant information. The Transaction Server 112 may also include a transaction engine 310 to enable or support electronic purchase transactions by computing device users. [0046] The functional components illustrated in FIG. 3 may be implemented in software within a single Transaction Server 112 or within multiple servers linked together by a local or wide area network or inter-server data connections. For ease of reference, the various embodiments are described with reference to a single Transaction Server 112; however, references to a single Transaction Server should not be construed as limiting the scope of the claims to implementations in which all transaction functionality is included within a single server device. [0047] As discussed above, the various embodiments enable users to request information regarding products seen within still and video images received from a variety of content delivery systems, with such Product Queries processed in a Transaction Server 112 in order to reply with a Product Information message. FIG. 4 shows process 400 that may be implemented in the various embodiments. A content delivery system, such as the mobile TV broadcast network 100 or an Internet multicaster, may broadcast content in the ordinary fashion, step 402, which is received and displayed by computing devices configured to receive the particular form of content, step 404. While viewing the received image content on a computing device, a user may see a product of interest and initiate a Product Query by selecting a portion of the displayed image using any of a variety of user interface tools, step 406. For example, if the computing device is a mobile device with a touchscreen display, a user may initiate a Product Query simply by touching the display with a fingertip and circling the product of interest on the screen. The computing device may be configured to receive the user's input in order to select an indicated portion of the displayed image. The computing device may generate a Product Query message including that image selection as Image Selection Information, which is transmitted to the Transaction Server 112 via a unicast network, step 408. The Transaction Server 112 receives the Product Query message and processes the image selection in a Product Correlation Engine 306 to recognize image objects contained in the image selection, step 410. As described below, a variety of different image recognition techniques may be implemented to identify or recognize particular image objects within the image selection. The recognized image objects may then be compared to product images stored within a merchandise database, step 412, to determine if there is a match, determination 414.
If an image object matches or corresponds to a product within the merchandise database (i.e., determination 414 = "Yes"), information regarding the product, such as the brand name, source, and price of the product, may be transmitted to the mobile device in a Product Information message, step 416. The Recommendation Engine 308 may also be used to refine product matches to fit a User Profile. If an image object does not match or correspond to a product within the merchandise database (i.e., determination 414 = "No"), the image object may be used in conjunction with other information within the Recommendation Engine 308 in order to develop an alternative product recommendation, step 418. The alternative product recommendation or recommendations may then be transmitted to the mobile device in a Product Information message, step 420. [0048] Referring to FIG. 4B, process 400 continues as the user's computing device may receive the Product Information message from the Transaction Server 112, generate a display of the received product information, and prompt the user to indicate whether a transaction is desired, step 432. Such a display may be generated by a user interface which may receive the user's response, step 434, and from that response determine whether the user desires to purchase a product, determination 436. If the mobile device determines that the user wants to initiate a purchase (i.e., determination 436 = "Yes"), product information received in the Product Information message may be used to process the transaction, step 438. A variety of different transactions may be enabled by the various embodiments, such as delivering a coupon to the computing device to support a purchase, step 440, conducting an electronic on-line transaction via the Internet, step 442, enabling the purchase of the product in a store by delivering driving directions, product identity information, etc., step 444, and creating a GIS information package or other reminder to prompt the user to complete the transaction at a later time, step 446. Such transactions may be supported by a transaction engine 310 within the Transaction Server 112, by a transaction engine within a merchant's server (not shown), by a client application running in the computing device, or by a computing device client application working in cooperation with a transaction engine 310 within the Transaction Server 112 or merchant server. [0049] If the computing device determines that the user chooses not to purchase any displayed products (i.e., determination 436 = "No"), the computing device may store information contained in the Product Information message in memory for later reference by the user, step 450. For example, if the computing device is a mobile device it may store the product name and information regarding the location of stores carrying the product. Then, in the future, when the mobile device determines that the user is close to a store selling the product (e.g., based on GPS coordinates determined by a GPS receiver in the mobile device), a display may be generated to inform the user that the product may be purchased nearby. In an embodiment the Transaction Server or another server may store information about products that the user has declined to purchase in the past and use this information in preparing future purchase recommendations to avoid recommending the same product repeatedly. In another embodiment such information may also or alternatively be stored in the user's computing device.
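The client-side branch of process 400 described above can be sketched as follows. This is a non-authoritative Python outline; the step numbers in the comments track FIG. 4B, while the handler signature, the ui and storage interfaces, and the option names are assumed for illustration.

    from enum import Enum, auto

    class TransactionOption(Enum):
        COUPON = auto()      # step 440: deliver a redeemable coupon
        ONLINE = auto()      # step 442: electronic on-line transaction
        IN_STORE = auto()    # step 444: directions/product info for a store
        REMINDER = auto()    # step 446: GIS package or later reminder

    def handle_product_information(message: dict, ui, storage) -> None:
        """Sketch of the client-side branch of process 400 (FIG. 4B)."""
        ui.show_products(message["products"])        # step 432
        wants_purchase = ui.ask_purchase()           # steps 434/436
        if not wants_purchase:
            storage.save(message)                    # step 450: keep for later
            return
        option = ui.choose_option(list(TransactionOption))  # step 438
        if option is TransactionOption.COUPON:
            ...  # request and store a coupon (step 440)
        elif option is TransactionOption.ONLINE:
            ...  # open a connection to the merchant server (step 442)
        elif option is TransactionOption.IN_STORE:
            ...  # fetch directions and product identity info (step 444)
        else:
            ...  # create a GIS reminder (step 446)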
[0050] Examples of messages which may be passed among various components in the embodiment methods are illustrated in FIG. 5, which shows message flow diagram 500. This message flow diagram 500 is applicable to a mobile broadcast television content delivery system, but the messages in that system are representative of messages that may be exchanged in other types of content delivery systems. A mobile TV broadcast network 100 may broadcast video program content, message 502, which is received and displayed by mobile devices 118. A user viewing such video content may provide an input into the mobile device 118 indicating a product or a portion of the video containing a product of interest. The mobile device uses such input to generate a Product Query message, processing 504. The Product Query message is transmitted to a Transaction Server 112, such as via a unicast network, message 506. As described above, the Transaction Server 112 processes the image information received in the Product Query message to develop a Product Information message, processing 508. The Transaction Server 112 generates a Product Information message that is transmitted to the mobile device 118, such as via a unicast network, message 512. When the Transaction Server 112 sends a Product Information message 512 to the mobile device 118, it may also send a message to the merchant 124 alerting it to the Product Information so that the merchant may be prepared to respond to a user transaction request, message 513. [0051] If a user of the mobile device 118 decides to act on the Product Information (e.g., purchase one of the recommended products), corresponding user inputs may be processed by the mobile device 118 to generate a transaction request message that may be transmitted to the Transaction Server 112, message 514. The Transaction Server 112 may use the information in the transaction request message 514 to initiate a transaction with a merchant 124 by sending a transaction initiation message 516 via a network, such as the Internet. In response, the merchant 124 may reply with a transaction response message 518 sent via the network, such as the Internet. As an example, the transaction response message 518 may include an electronic coupon for delivery to the mobile device 118. If the transaction response message 518 is returned to the Transaction Server 112, a transaction information message 520 may be forwarded by the Transaction Server 112 to the mobile device via the unicast network. Alternatively, the merchant 124 may respond directly to the mobile device 118, such as by transmitting a coupon or information via the unicast network, message 522. [0052] If a user of the mobile device 118 decides to purchase a product, a purchase transaction may be completed directly between the mobile device 118 and the merchant 124, such as by transmitting a transaction request message via a unicast network, message 524. In response, a merchant 124 may reply with a transaction response message sent via the unicast network, message 526. [0053] Periodically, the Transaction Server 112 may send transaction summary information to the merchant 124, such as information regarding consumer interest in certain products demonstrated in Product Query messages, as well as product recommendations sent to consumers in Product Information messages, message 528. Similarly, the Transaction Server 112 may send summary transaction information to the broadcaster 100 and/or content providers (not shown in FIG.
5), since such information may be useful to their business planning and advertising revenues, message 530. [0054] The processing of the various embodiments may be illustrated by way of an example. If a user is watching the movie Terminator on a mobile device and suddenly has a desire to purchase the black leather jacket worn by Arnold Schwarzenegger, the user may highlight the portion of a video image containing the jacket, such as by circling the image portion with a finger on a touchscreen display. The mobile device processes that user input to generate a Product Query message which is transmitted to a Transaction Server within the mobile TV broadcast network (or elsewhere). This Product Query message may be transmitted via a unicast network, such as a cellular data communication network. The Transaction Server receives the Product Query message and processes the image selection in a Product Correlation Engine to recognize the particular product of interest. The Product Correlation Engine determines that the most likely product is the black leather jacket within the image selection. This information may be used to compare the black leather jacket image to available merchandise within a merchandise database. Not surprisingly, the particular black leather jacket worn by Arnold Schwarzenegger in the Terminator is no longer commercially available, so the particular product cannot be recommended for purchase. Instead, the Recommendation Engine may use information about the broadcast content (i.e., that the program is the Terminator), the user's prior purchasing behavior (e.g., information that may be stored within a database regarding the user's purchasing behavior, user account information, or information provided by the mobile device itself), and the identified product to develop a recommendation of other products that the user may be interested in purchasing. As part of developing a Product Information message, the Recommendation Engine may also consider comments or additional information provided by the user as Annotation Information in the Product Query, such as jacket size, color preference, or other expressions of interest. For example, the Recommendation Engine may select two or three available black leather jacket designs that are similar to the jacket that appears in the movie. The Recommendation Engine may also recommend other merchandise, such as dark sunglasses similar to the model worn by Arnold Schwarzenegger in the movie. The Product Information message may then be transmitted to the mobile device via a unicast network, such as the cellular data communication network that carried the Product Query message. The mobile device receives the Product Information message and a client application uses the information contained therein to generate a user interface display. A user may then use the user interface on the mobile device to indicate whether any of the recommended products should be purchased. If the user chooses to purchase one of the recommended products, the mobile device and/or the Transaction Server may initiate a transaction using any of a number of known transaction methods. [0055] While the foregoing example concerned a mobile television broadcast content delivery system, the user and device operations would be similar with most other types of content delivery systems. [0056] An example embodiment method that may be implemented within a computing device to enable a user to inquire about a product shown within a video image is illustrated in FIG. 6, which shows process 600.
During the display of a broadcast program, step 602, users may see a product on the screen that interests them. Using a user interface device on the computing device, such as a touchscreen display, a pointer device, or scroll keys, a user may indicate a desire to freeze the video in order to indicate the product of interest. The computing device may be configured by a client application to receive that user input and cause the video display to pause on a particular image, step 604. Pausing of the video display may be accomplished by storing a frame in display memory while either suspending reception of broadcast content or continuing to store broadcast content in memory for delayed viewing. With the image frozen on the display, the user may designate a portion of the image containing the product of interest, such as by circling the product on a touchscreen display or with a computer mouse. A client application on the computing device may receive the user image designation, step 606, such as in the form of inputs from a touchscreen display, a series of key strokes on arrow keys, or inputs from a pointing device (e.g., a computer mouse or touchpad). [0057] The client application may be configured to interpret the user inputs as selecting a portion of the displayed image and store coordinates of the selected portion in a manner that can be communicated to a Transaction Server 112. For example, the client application may record the particular pixels selected by the user as image data. As another example, the client application may record boundary coordinates of the image selection relative to particular coordinate axes, such as a corner of the video image. As a further example, the client application may record the pixel numbers encompassed within the image selection but not the image data. In a further example embodiment, the client application may record an image identifier and coordinates of a user touch to the image, such as the frame number of the touched image and the pixel or distance coordinates (e.g., X and Y distances from a coordinate axis like a corner). Other methods may also be used for identifying the location of a user touch or an area encircled within a user designated image selection in a manner that will enable the Transaction Server to obtain the image from an accessible database and determine the portion of the image selected by the user. For ease of reference, any data that identifies a location or area of an image selection or includes the selected image data is collectively referred to herein as Image Selection Information. It is noted that Image Selection Information is information specific to an image and is not intended to encompass information appended or linked to an image such as a hyperlink. [0058] The client application may also generate a prompt on the display prompting the user to provide additional input regarding the Product Query. For example, the user may be prompted to type in a description or quantity for the desired product, such as a size, color, or number of units the individual is interested in purchasing. Additionally, the prompt may invite the user to include product descriptors, such as "jacket," "sweater," "surfboard," etc. Further, the prompt may invite the user to provide user input as spoken words which the mobile device may record for inclusion in the Product Query message.
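As one illustration of the coordinate-recording alternative described in paragraph [0057], the sketch below reduces a traced selection to a bounding box measured from a corner of the frozen frame. The function names and the dictionary layout are assumptions; any equivalent representation of Image Selection Information would serve.

    from typing import List, Tuple

    def bounding_box(trace: List[Tuple[int, int]]) -> Tuple[int, int, int, int]:
        """Reduce a finger/mouse trace (pixel points) to (x, y, width, height),
        measured from the top-left corner of the frozen video frame."""
        xs = [p[0] for p in trace]
        ys = [p[1] for p in trace]
        x, y = min(xs), min(ys)
        return x, y, max(xs) - x, max(ys) - y

    def record_selection(frame_number: int,
                         trace: List[Tuple[int, int]]) -> dict:
        """Package coordinates (not pixel data) as Image Selection Information."""
        x, y, w, h = bounding_box(trace)
        return {"frame": frame_number, "x": x, "y": y, "width": w, "height": h}

    # Example: a rough circle traced around a jacket in frame 1234
    selection = record_selection(1234, [(40, 60), (120, 58), (118, 190), (42, 188)])

Because only a frame number and four integers are recorded, such a selection keeps the eventual Product Query message small while still allowing the Transaction Server to retrieve the frame from an accessible database.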
The more information provided by the user in step 608, the better the Transaction Server 112 may be able to identify the product of interest and provide relevant Product Information messages to the user. For example, the user may be prompted to speak a Product Query, such as "how much is that sweater?" Such additional input from the user may enable the Transaction Server 112 to determine that the user is interested in a sweater and that the user wants to know the price. As another example, a user may say "does that jacket come in size extra large?" Such information would help the Transaction Server 112 determine that the user is interested in the jacket within the image selection and determine the user's desired size. [0059] The client application operating in the mobile device may receive the additional user input as Annotation Information, step 610, and generate a Product Query message that includes the Image Selection Information (i.e., the information regarding the user selected portion of the video image) along with the Annotation Information and any additional inputs, step 612. The Product Query message may then be transmitted by the computing device to the Transaction Server 112 via a unicast network 122, such as the Internet or a cellular data communication network, step 614. Once the Product Query message has been transmitted, the computing device may return to displaying the broadcast content, step 618. In so doing the computing device may begin playing content stored in memory during the Product Query process so that the user continues viewing the content from the point that the image was frozen on the display. Alternatively, the computing device may simply begin receiving and displaying the content at the current point in the broadcast program. [0060] After transmitting a Product Query message, a computing device may receive a response from the Transaction Server 112 in the form of a Product Information message. An example embodiment method for receiving such a message that may be implemented within a computing device, such as in a client application running on the computing device, is illustrated in FIG. 7, which shows process 700. While the computing device is performing other tasks, such as operating within a main loop or displaying a broadcast program, step 702, a Product Information message may be received and processed by the computing device, step 704. A client application operating in the computing device may parse the received Product Information message and generate a display using the included information that prompts the user about conducting a purchase, step 706. This process may involve informing the user that a Product Information message has been received and prompting the user to indicate whether the current activity (e.g., viewing a broadcast program) should be interrupted to view the contents of the message, determination 708. If the user response to such a prompt indicates that the current program should not be interrupted (i.e., determination 708 = "No"), the computing device may be configured to store the received Product Information for presentation to the user at a later time, step 709. When the user indicates that the Product Information should be displayed, the computing device may generate a display based on the received message content, step 710. For example, the Product Information message may include a hypertext script, such as HTML or XML, which causes the computing device to generate a display as defined by the Transaction Server 112.
As another example, the Product Information message may include data and images in a format that a client application operating on the computing device can use to generate a suitable display. As part of the display, user interface menu options may also be presented to enable the user to select an option for conducting a transaction for one or more products listed in the display. [0061] When the computing device receives a user response to the purchase display prompt, step 712, it may determine whether the user indicates that a purchase should be initiated, determination 714. If the user indicates that no purchase should be initiated (i.e., determination 714 = "No"), the computing device may be configured to store the product information from the Product Information message in memory, step 716, and return to the operation underway before the message was received, returning to step 702. For example, the computing device may be configured to store information regarding the recommended product(s) and sources (e.g., stores where such products may be purchased) in memory so that the user can recall such information or the computing device may later remind the user about the availability of such products, such as when the user is within the vicinity of a store selling the product. [0062] If the user indicates that a purchase should be initiated (i.e., determination 714 = "Yes"), the computing device may generate another user interface display providing the user with options for conducting such a transaction, step 718. A transaction may be accomplished using any known electronic or conventional purchase transaction method. The display of transaction options may list alternative ways that the user may initiate a transaction, or alternatively begin a transaction according to a previously selected transaction process. [0063] In a first embodiment transaction method, a user may opt to receive a coupon that entitles the user to a discounted price for the particular product, in which case the user may transmit a request for such a coupon in a message addressed to the Transaction Server 112 or the merchant, such as a request to access a website addressed to the merchant URL, step 720. Such a message may be any form of addressable message, such as SMS, e-mail or a TCP/IP message addressed to a URL, and may be transmitted via a unicast wireless network, such as the Internet, a cellular data communication network or a WiFi network. The address for a coupon request message may be included within the Product Information message, and the computing device may be configured to use such address information when generating and transmitting a coupon request message. The computing device may further be configured to receive a coupon from a merchant and store the coupon in memory for use in a later transaction, step 722. Such a merchant coupon may be transmitted via the unicast network used to send the coupon request message or may be transmitted by another content delivery system. For example, methods and systems for transmitting coupons to mobile devices via mobile TV broadcast transmissions are described in U.S. Patent Application No. 12/417,493 entitled "Systems and Methods for Distributing and Redeeming Credits on a Broadcast System" filed April 2, 2009, the entire contents of which are hereby incorporated by reference.
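A client-side sketch of the first transaction method (steps 720 and 722) follows, assuming the coupon request address and product identifier arrive in the Product Information message and that the request is carried as an HTTP POST over a unicast network; the JSON field names and the local file used to store the coupon are illustrative.

    import json
    import urllib.request

    def request_coupon(product_info: dict, device_id: str) -> bytes:
        """Send a coupon request to the address supplied in the Product
        Information message (step 720) and store the returned coupon for
        use in a later transaction (step 722)."""
        url = product_info["coupon_request_url"]  # supplied by the server
        payload = json.dumps({
            "device_id": device_id,
            "product_id": product_info["product_id"],
        }).encode("utf-8")
        req = urllib.request.Request(
            url, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            coupon = resp.read()  # opaque, possibly encrypted coupon data
        with open("coupon.bin", "wb") as f:  # persist for the point of sale
            f.write(coupon)
        return coupon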
[0064] In a second embodiment transaction method, a user may opt to access a merchant website in order to conduct an online transaction, in which case the computing device may initiate an Internet connection or a data call to the merchant's URL, step 724. The merchant URL may be included within the Product Information message, and the computing device may be configured to use the provided URL when initiating the merchant server access. Once an online connection is established to a merchant server, the user may complete an online purchase transaction in an ordinary manner. [0065] In a third embodiment transaction method, a user may opt to receive information regarding stores or merchants that carry the product of interest, in which case the computing device may be configured to transmit a request for such information to the merchant URL, step 726. Such a message may involve accessing the merchant website so that the user can obtain more information about the product and the merchant, as well as identify nearby store locations and request driving directions. Alternatively, the computing device may format a data request message specifying the information desired in a format that can be processed by the merchant server. The computing device may then receive and store the product and merchant information, step 728. For example, the computing device could store a website image downloaded from the merchant URL. As another example, the computing device may receive and store an electronic brochure regarding the product or the merchant. [0066] In a fourth embodiment transaction method, a user may opt to retrieve store location information for stores or merchants that carry the product of interest, in which case the computing device may be configured to transmit a location request message to the merchant URL, step 730. The computing device may format a data request message specifying the location information desired in a format that the merchant server can process. The computing device may then receive and store the received store location information, step 732. For example, the merchant location information may be in the form of GPS coordinates or geographic information service (GIS) data that a mobile device can implement in a navigation or GIS application that can assist the user in locating the nearest store offering the particular product for sale. [0067] As mentioned above, the Transaction Server 112 may receive and automatically process Product Query messages received from computing devices. An example embodiment method by which a Transaction Server 112 may respond to a Product Query message is illustrated in FIG. 8, which shows process 800. In process 800 the Transaction Server 112 may receive a Product Query message via a unicast network, such as the Internet, a local area network, a cellular data communication network, or a combination of two or more such networks, step 802. For example, a mobile device 118 may transmit a Product Query message via a cellular data communication network 122 to a mobile TV broadcast network 100 which forwards the message via a local area network 102 to the Transaction Server 112. The Transaction Server 112 may parse the received Product Query message to obtain the portion of the broadcast image selected by the user and provide this image selection to a Product Correlation Engine, step 804.
The Transaction Server 112 may also parse the received Product Query message to obtain and process any Annotation Information provided by the user, such as text or a verbal recording, step 806. If Annotation Information received from the user is in the form of a sound recording of a voice, the Transaction Server 112 may process the verbal comments in a voice-recognition software module. Recognized or written Annotation Information may be parsed and analyzed to recognize words that may be useful in interpreting the Product Query, such as product nouns, adjectives and numbers. Such processing may include analyzing the user comment in the context of the received image, the context of the broadcast program, and previous user transactions. [0068] The received image selection may be processed in a Product Correlation Engine to recognize outlines of objects and parse the image into recognized objects, step 808. A variety of known image processing methods may be used to identify objects within the image selection that may be products and ignore elements within the image selection that are not relevant to a Product Query, such as background scenery, human features and common structures. Image objects may be further recognized by comparing recognized outlines to a database of standard or known objects. Further, objects may be recognized by comparing recognized outlines to a database of images of known product configurations. Still further, objects may be recognized by comparing recognized outlines to a database of images of products known to be present in the broadcast content, such as images related to product placement advertising. A database of images related to product placement advertising may be obtained from the content provider. [0069] Identified image objects may be compared to known patterns of products in order to further correlate objects within the image selection with purchasable products, step 810. If the image selection includes multiple image objects which compare favorably with known product patterns, or one or more image objects compares favorably to a plurality of different product patterns, the Product Correlation Engine may use information obtained from the user's Annotation Information to select a product of interest to the user, step 812. For example, if an image selection includes the torso of an actor wearing a leather jacket and sunglasses, the Product Correlation Engine might identify the leather jacket and the sunglasses as potential product image objects. To select the image objects of most interest to the user, the Product Correlation Engine (or other module within the Transaction Server 112) may determine whether the user's Annotation Information referred to a jacket, sunglasses, a clothing size, color or a style that would further clarify the user's interest. Thus, if the Annotation Information included "in size 44," the Product Correlation Engine (or other module within the Transaction Server 112) may conclude that the user is interested in the leather jacket, and thus select one image object (i.e., the leather jacket) of most likely interest to the user in step 812. [0070] Information regarding a most likely image object may then be processed in a Product Correlation Engine (or other module within the Transaction Server 112) in order to formulate a Product Information message, step 814. A Product Correlation Engine may compare the selected image object to a database of available merchandise, step 816.
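The annotation-driven disambiguation of step 812 can be sketched as a simple keyword overlap, as below. The cue table and scoring rule are assumptions made for illustration; a deployed Product Correlation Engine could use far richer language analysis.

    # Hypothetical keyword cues linking annotation words to object categories.
    CATEGORY_CUES = {
        "jacket": {"jacket", "coat", "leather", "size"},
        "sunglasses": {"sunglasses", "glasses", "shades", "tint"},
    }

    def select_object(candidates: list, annotation: str) -> str:
        """Sketch of step 812: pick the candidate image object whose category
        cues best overlap the words in the user's Annotation Information."""
        words = set(annotation.lower().replace(",", " ").split())

        def score(category: str) -> int:
            return len(CATEGORY_CUES.get(category, set()) & words)

        # Falls back to the first candidate when the annotation is uninformative.
        return max(candidates, key=score)

    # "in size 44" shares the cue "size" with the jacket category, so the
    # jacket is selected from the two recognized image objects.
    choice = select_object(["jacket", "sunglasses"], "in size 44")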
In doing so, the image object may be compared to images of available merchandise stored within a merchandise database. As part of this process, the Product Correlation Engine may also consider the user Annotation Information, especially regarding style, size, color and other distinguishing characteristics that may be matched to available merchandise in order to better address the user's Product Query. In conducting such a comparison, the Product Correlation Engine may determine whether the image object of interest to the user matches or corresponds to a particular merchandise product available in the marketplace, determination 820. This determination may be limited to the merchandise of a particular one or few merchants or suppliers, or may be practically unlimited, encompassing any product available or mentioned on the Internet (e.g., a Google search on matching products). [0071] It should be appreciated that image processing, product recognition and comparisons to available merchandise may be accomplished in a single process, such as by comparing the image selection to a database of images of available products or to a database of images of product placement merchandise. [0072] If a match or correspondence between the selected image object and a product or products available in the marketplace is found (i.e., determination 820 = "Yes"), the Recommendation Engine may recall information regarding the matched merchandise and available sources (e.g., suppliers and merchandisers) from a merchandise database, step 822. The recalled information may be used to generate a Product Information message, step 824, which is transmitted to the computing device, step 826. The Product Information message may be transmitted to the computing device using any known unicast method, such as SMS, e-mail, or a TCP/IP data message. Alternatively, a Product Information message may be broadcast in a format that can only be processed by the destination computing device, such as encrypted or tagged in a manner that the destination computing device can receive and process and other computing devices will ignore. [0073] The Transaction Server 112 may also record information regarding transmitted Product Information messages, step 828, such as maintaining a database of Product Queries and corresponding Product Information messages transmitted to particular computing devices. Such information may be of value to manufacturers and merchandisers, as well as content providers. For example, such information may be used for future product placement guidance. Further, maintaining a database of Product Information messages may facilitate completing corresponding purchase transactions. The Transaction Server 112 may also inform a merchant when a Product Information message is transmitted so that the merchant can be prepared to receive or recognize a subsequent purchase transaction by the particular mobile device, optional step 830. In addition to enabling the merchant to complete a transaction, such information may be useful to merchants for advertising and market research purposes. Additionally, the Transaction Server may inform the provider of the broadcast content that a user has expressed an interest in a product and a Product Information message has been sent, optional step 832. Content providers may find such information to be useful for generating advertising revenue and developing sponsors for their programs.
Once a Product Information message has been transmitted and saved, and interested parties have been informed, the process for responding to a Product Query message may end, step 834. It should be noted that merchants and content providers may not be sent information regarding a particular transaction at the time the transaction is conducted. Instead, such information may be sent periodically, such as once daily, weekly or monthly and in a summary format, since the merchants and content providers may be more interested in general trends and consumer responses than particular transaction details. [0074] If no match or correspondence is found between the image object and a product or products available in the marketplace (i.e., determination 820 = "No"), the Recommendation Engine may use information regarding the identified image object, the user comments, and other available and relevant information in order to recommend one or more alternative products that may be of interest to the user, step 836. In identifying a recommended alternative product, the Recommendation Engine may consider a variety of sources of information that may provide insights regarding the user's interests, such as the nature or genre of the broadcast program, products or styles associated with actors within the broadcast program, the particular user's prior purchasing history, the user's demographic information, etc. For example, if a particular product is no longer commercially available, the Recommendation Engine (or other module within the Transaction Server 112) may identify a later model or similar product that is commercially available. Such alternative product recommendations, as well as information regarding the unavailability of the indicated product, may be used in generating the Product Information message, steps 822, 824. [0075] Even if a match or correspondence is found between the image object and a product or products available in the marketplace (i.e., determination 820 = "Yes"), the Recommendation Engine may also use information regarding the matched product, the user's Annotation Information, the User Profile, and other available and relevant information in order to recommend one or more additional products that may be of interest to the user, step 836. In identifying recommended additional products, the Recommendation Engine may also consider a variety of sources of information that may provide insight into the user's interests, such as the nature or genre of the broadcast program, products or styles associated with actors within the broadcast program, the particular user's prior purchasing history, the user's demographic information, etc. For example, if the identified and matched product is an article of clothing being worn by an actor in the broadcast program, the Recommendation Engine (or other module within the Transaction Server 112) may identify other products also being worn by the actor, such as sunglasses or a hat. As another example, the Recommendation Engine may recommend products associated with or related to the program or program genre. As a further example, the Recommendation Engine may recommend products based upon prior purchases made by the user, such as refills or additional quantities. Such additional product recommendations, as well as matched product information, may be used in generating the Product Information message, steps 822, 824. [0076] While the embodiment process 800 illustrated in FIG.
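The two branches of determination 820, including the Recommendation Engine processing of paragraphs [0074] and [0075], can be outlined as follows. The merchandise_db and recommender interfaces are assumed for illustration; the sketch only fixes the control flow described above.

    def respond_to_query(image_object, annotation, profile,
                         merchandise_db, recommender) -> dict:
        """Sketch of the determination 820 branches of process 800."""
        match = merchandise_db.find_match(image_object, annotation)  # det. 820
        if match is not None:  # determination 820 = "Yes"
            products = [match]                                   # step 822
            # Step 836 per paragraph [0075]: add related items, e.g. other
            # products worn by the same actor or genre-related merchandise.
            products += recommender.additional(match, annotation, profile)
        else:                  # determination 820 = "No"
            # Step 836 per paragraph [0074]: recommend alternatives such as
            # a later model or an equivalent competitor's product.
            products = recommender.alternatives(image_object, annotation,
                                                profile)
        return {"products": products}  # assembled into the message, step 824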
8 uses image recognition processing to identify products of interest, other methods may be used for correlating a Product Query message to particular merchandise. In an alternative embodiment illustrated in FIG. 9, product placement information supplied by the content provider regarding particular products placed within specific image frames and locations may be used in conjunction with image recognition processing in order to identify commercially available products of interest to the user. This embodiment method may employ processes similar to those described above with reference to FIG. 8 for like numbered steps. Once image objects are recognized within the image selection received in the Product Query message, step 808, the Product Correlation Engine may compare such image objects to product placement information supplied by the content provider and/or stored in a product placement database, step 902. It should be appreciated that the processes of steps 808 and 902 may be accomplished in a single process. For example, if a content provider makes available a database of images of advertising products as the products appear in the video content, recognizing image objects and matching image objects to products can be accomplished by an image comparison algorithm comparing the image selection to the images in the database. [0077] When multiple image objects compare favorably to multiple product placements, or one image object compares favorably to multiple product placements, the Product Correlation Engine may compare the product matches to the user Annotation Information to select a most likely product or products of interest, step 904. These comparisons to product placement information may be used to determine whether there is a match or correspondence between the user image selection and product placements, determination 906. If an image object is matched to a product placement (i.e., determination 906 = "Yes"), this information may be used to generate the Product Information message, steps 822, 824, which is transmitted to the inquiring computing device, step 826. [0078] If there is no match between the image object and products placed within the broadcast content (i.e., determination 906 = "No"), the image object and other information may be provided to the Recommendation Engine (or other module within the Transaction Server 112), step 908, to develop a Product Information message. The Recommendation Engine may compare the image object to a database of available merchandise, step 816, in a manner similar to that described above with reference to FIG. 8. The Recommendation Engine may also use the image object and other available information to recommend alternative or additional products, step 836, as described above with reference to FIG. 8. [0079] Since product placement advertising is an increasingly important form of advertising and content providers can know for certain the products that appear in the video content, content providers may supply a Transaction Server 112 with more detailed information regarding product placements. The example of a product placement image database that can be used by an image comparison algorithm is mentioned above with reference to FIG. 9. In a further example, the product placement information may be supplied in terms of an image identifier, such as frame numbers (or image time stamps), and product locations within each frame (e.g., image coordinates).
If a content provider supplies such product placement information, the Transaction Server 112 may not need to conduct image recognition processing. Further, the Product Query message generated by a mobile device need not include a portion of the image, since the indicated image coordinates (e.g., frame number and relative location coordinates) are all the information needed to identify a product placed in the video content. Thus, the information regarding a portion of a broadcast video image included in a Product Query message may be an image selection (i.e., image data) or a location (e.g., frame number and coordinates) within an image. [0080] An example method for generating Product Information messages utilizing detailed product placement information to obviate the need for image processing is illustrated in FIG. 10, which shows process 1000. In this example process 1000, a Product Query message received in step 802 may be parsed to obtain the image portion and information regarding the image frame number (or similar information), step 1002. Thus, in this embodiment, the mobile device may transmit information regarding an image quadrant or coordinates within an image along with the particular frame number or broadcast timestamp of the image on which the user indicated a product interest. For example, in this embodiment, a user may indicate a product of interest simply by freezing the broadcast on a particular image and touching the desired product on a touchscreen or "clicking" on the product with a pointer device (e.g., a computer mouse). The mobile device may format a Product Query message which contains the image coordinates of the user's touch or click along with the frame number or time stamp of the particular image. The user may also provide comments which the Transaction Server 112 may receive and parse, step 806, in a manner similar to that described above with reference to FIG. 8. The Transaction Server 112 may then use the image touch coordinates and frame number (or timestamp) to search a product placement database, step 1004, to determine whether an advertised product was placed in the particular frame at the indicated touch coordinates or image portion, determination 1006. If a product was placed in the particular frame at the indicated location (i.e., determination 1006 = "Yes"), the matched or corresponding product may be used to generate a Product Information message, steps 822, 824, in a manner similar to that described above with reference to FIG. 8. If no product was placed in the particular frame at the indicated location (i.e., determination 1006 = "No"), information regarding the broadcast content, the user's comments and the user's purchase history may be used to recommend alternative products that may be of interest to the user, step 1008. Even if the user indication matches or corresponds to a placed product (i.e., determination 1006 = "Yes"), information regarding the matched or corresponding product, the content type, the user's comments and the user's purchase history may be used to recommend additional products that may be of interest to the user, step 1008. Such alternative or additional products may be used to generate the Product Information message, steps 822, 824, in a manner similar to that described above with reference to FIG. 8. [0081] Another embodiment is illustrated in FIG. 11, which shows process 1100, which uses human image recognition processes in order to better match selected images to products of likely interest to a user.
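Searching a product placement database by frame number and touch coordinates (step 1004 and determination 1006) might look like the following sketch. The table layout, frame ranges, bounding boxes, and product identifiers are all hypothetical.

    # Hypothetical product placement table supplied by a content provider:
    # each entry maps a frame range and a bounding box to a placed product.
    PLACEMENTS = [
        {"frames": range(1200, 1300), "box": (30, 50, 160, 210),
         "product_id": "JACKET-001"},
        {"frames": range(1200, 1300), "box": (180, 40, 260, 90),
         "product_id": "SUNGLASSES-007"},
    ]

    def find_placed_product(frame: int, x: int, y: int):
        """Sketch of step 1004 / determination 1006: look up whether an
        advertised product was placed at the touched coordinates in the frame."""
        for entry in PLACEMENTS:
            left, top, right, bottom = entry["box"]
            if (frame in entry["frames"]
                    and left <= x <= right and top <= y <= bottom):
                return entry["product_id"]   # determination 1006 = "Yes"
        return None                          # determination 1006 = "No"

    # A touch at (100, 120) in frame 1234 falls inside the jacket's box.
    assert find_placed_product(1234, 100, 120) == "JACKET-001"

Because the lookup is a table search rather than image analysis, no pixel data needs to travel in the Product Query at all, which is the point of this embodiment.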
[0081] Another embodiment is illustrated in FIG. 11, which shows process 1100, which uses human image recognition processes in order to better match selected images to products of likely interest to a user. In this embodiment, user image selections and comments parsed from received Product Query messages, steps 802, 804, may be transmitted to an organization which employs people to look at image selections and send back messages regarding recognized objects, step 1102. It is well known that human beings can recognize objects in images far better and faster than any known computer process. Thus, a service provider could receive image queries and pass received images to operators who can look at the images on workstations and type in a description or product name of products recognized by the operators. Such operators could be employees of the service provider, members of an online community, users of a computer game that includes image matching/selection as a recreational activity, or workers of an open-market platform compensated on a per-job basis. Such operators may simply identify recognized objects, such as "jacket" or "sunglasses". Alternatively, operators may be supported by a visual database so that they may further identify product suppliers and models. In this manner, humans may recognize specific products and provide identifiers that can be used to locate such products in the marketplace. [0082] A Transaction Server 112 may await the response from such operators, step 1104, and when such a response is received, step 1106, compare the received product information to a database of available merchandise, step 1108, to determine if there is a match or correspondence, determination 1110. If the recognized product matches or corresponds to available merchandise (i.e., determination 1110 = "Yes"), this information may be used to generate the Product Information message, steps 822, 824, in a manner similar to that described above with reference to FIG. 8. If there is no match between the recognized product and available merchandise (i.e., determination 1110 = "No"), a Recommendation Engine may use the product type received from the operator along with information regarding the broadcast content type, user comments and the user's purchase history to recommend alternative products that may be of interest to the user. Even if the recognized product matches or corresponds to available merchandise (i.e., determination 1110 = "Yes"), a Recommendation Engine may use the matched or corresponding product along with information regarding the broadcast content type, user comments and the user's purchase history to recommend additional products that may be of interest to the user. Such alternative or additional products may be used to generate the Product Information message, steps 822, 824, in a manner similar to that described above with reference to FIG. 8.
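The operator round trip in steps 1102 through 1110 is essentially a dispatch-and-await pattern. The sketch below models it with in-process queues standing in for the operator workforce; every identifier here is hypothetical, and the timeout value is an assumption.

```python
import queue

operator_queue = queue.Queue()   # image selections awaiting human review
response_queue = queue.Queue()   # operator descriptions, e.g. "sunglasses"

def await_operator_match(image_selection, merchandise, timeout_s=60):
    operator_queue.put(image_selection)                      # step 1102
    try:
        description = response_queue.get(timeout=timeout_s)  # steps 1104, 1106
    except queue.Empty:
        return None
    d = description.lower()
    # step 1108: compare the operator's description to available merchandise
    matches = [m for m in merchandise if d in m["name"].lower()]
    return matches or None                                   # determination 1110
```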
[0083] As mentioned above, one method of facilitating a transaction involves transmitting a coupon for a recommended product to the computing device, which the user can redeem at the time of purchase. FIG. 12 shows process 1200, which illustrates an embodiment method that may be implemented within a Transaction Server 112 or merchant server (both of which are referred to here as the "receiving server") to accomplish such a coupon delivery. A receiving server may receive a coupon request message from a computing device, step 1202, such as via a unicast network. The receiving server may parse the message to obtain the product data as well as an identifier of the requesting computing device, step 1204. The receiving server may use the product data to obtain further information regarding the product from a merchandise database, step 1206. The receiving server may further determine a coupon type based upon the obtained product data, the computing device identifier, the user's purchase history (e.g., a record of past transactions with the merchant or merchants by the user associated with the identified mobile device), the User Profile, and other information, step 1208. Using this information, the receiving server may generate the coupon, step 1210, which may be in the form of an encrypted data message which includes sufficient information for a merchant to receive the coupon from a user and credit the user for the value of the coupon. The structure and content of electronic coupons are well known, examples of which are included in U.S. Patent Application No. 12/417,493 previously incorporated by reference. The generated coupon may then be transmitted to the computing device, step 1212. Coupons may be transmitted via a unicast network, such as the Internet or a cellular data communication network, or by a broadcast network with appropriate packaging of the coupon information to effect delivery to particular computing devices. The coupon information may also be stored within the Transaction Server 112 for future reference, step 1214, and/or transmitted to a merchant server for use in completing a transaction, step 1216.
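As a concrete illustration of steps 1208 through 1210, the sketch below selects a coupon type from the purchase history and packages the result for transmission. The field names and discount values are invented for the example, and an HMAC signature stands in for the encryption mentioned above so that the sketch stays self-contained.

```python
import hashlib
import hmac
import json
import time

SERVER_KEY = b"shared-secret-with-merchant"  # assumed key; distribution not shown

def generate_coupon(product_id, device_id, purchase_history):
    # step 1208: a repeat purchaser gets a deeper (illustrative) discount
    discount = 15 if product_id in purchase_history else 10
    payload = {"product": product_id, "device": device_id,
               "discount_pct": discount, "issued": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}  # step 1210: merchant verifies sig
```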
[0084] For a variety of reasons, a Transaction Server 112 may be configured to keep a record of completed transactions resulting from responses to Product Query messages. Such information may be very valuable for merchandisers and content providers as records of actual transactions prompted by the broadcast content, product placements and the services provided by the various embodiments. FIG. 13 shows process 1300, which illustrates an embodiment method for tracking such transaction information. When a transaction is initiated, whether online or in a physical storefront, information regarding the computing device and an identifier of a particular Product Information message may be obtained from the purchaser's computing device, step 1302. For example, if the purchase involves the process of using an electronic coupon stored within the computing device, that coupon may include an identifier associated with the Product Information message that resulted in the coupon. The process of transmitting the coupon to the point-of-sale system may communicate the computing device identifier (e.g., telephone number, MAC address, or other device ID). Having received that information, the point-of-sale system may complete the transaction, step 1304, and transmit the identifiers associated with the Product Information message (or other transaction-related identifiers) to the Transaction Server, step 1306. The point-of-sale system may also store the completed transaction information in memory, step 1308. A Transaction Server 112 may correlate information regarding completed transactions and corresponding Product Information messages in order to improve the processing of the Recommendation Engine, such as by implementing learning algorithms, optional step 1310. Additionally, the Transaction Server 112 may store the transaction information within a user purchase history database for use in responding to future Product Query messages, optional step 1312. Additionally, the Transaction Server 112, or an owner of that server, may wish to inform content providers of purchase statistics, and thus the advertising effectiveness of product placement advertising, optional step 1314. [0085] While the foregoing description of users selecting products within broadcast content referred to non-advertising content, the embodiments and processes may apply equally well to advertising (i.e., commercial) content. Thus, if a user submits a Product Query including an image from a commercial, the various embodiments will perform in a similar manner to result in a Product Information message related to the commercial. Thus, the various embodiments may provide a direct purchasing option for broadcast commercials. [0086] As mentioned above, an embodiment method for completing a transaction may include transmitting information regarding a merchant or store where the product of interest may be purchased. FIG. 14 shows process 1400, which illustrates an example method for implementation in a mobile device for accomplishing such a transaction. As described above with reference to FIG. 7, a mobile device may receive location information for a store or merchants in a Product Information message or in messages received in response to a purchase initiation from the mobile device (see steps 718, 730, 732). Referring to FIG. 14, in process 1400 a mobile device may be configured to operate with a main loop, step 1402, which schedules normal processing within the mobile device. As part of the main loop, the mobile device may periodically obtain GPS coordinates from a GPS receiver circuit within the mobile device, step 1404. A mobile device processor may compare the obtained GPS coordinates to the location data received in response to Product Queries that is stored in memory, step 1406. In doing this comparison, the processor may determine whether the mobile device is currently located close to a merchant or store that has been identified as a source for a product of interest in the past, determination 1408; a sketch of this proximity check follows the description of this process below. If the mobile device is not close to any merchants or store locations stored in memory (i.e., determination 1408 = "No"), the mobile device processor may return to the main loop, step 1402. However, if the mobile device processor determines that its current location is within a predetermined distance of a store or merchant location stored in memory (i.e., determination 1408 = "Yes"), the processor may recall the data records stored in memory associated with that particular product of interest, step 1410. Using the information recalled from memory, the processor may generate a display or alert to inform the user that a source of a product of interest is nearby, step 1412. As part of this display, the processor may generate a prompt to enable the user to indicate whether the information regarding the source for the product should be kept or deleted, determination 1414. If, in response to such a prompt, the user indicates that the data record associated with that product should be deleted (i.e., determination 1414 = "delete"), the mobile device processor may delete the corresponding data record from memory, step 1416, and return to the main loop, step 1402. If the user does not choose to delete the data record (i.e., determination 1414 = "keep"), the processor may generate a display enabling the user to indicate whether a coupon for the particular product should be displayed, determination 1418.
If the user indicates that the coupon should not be displayed (i.e., determination 1418 = "No"), this may indicate that the user is not interested in purchasing the product at this time, and therefore the processor may return to the main loop, step 1402. If the user requests that the coupon be displayed (i.e., determination 1418 = "Yes"), the processor may recall the coupon information stored in memory and generate a display of the coupon or otherwise prepare an electronic coupon for redemption, step 1420. Once the coupon redemption has been accomplished, the processor may return to the main loop, step 1402. [0087] In the alert generated in step 1412, the processor may provide the location information of the merchant or store to a navigation or GIS application so that the application can provide the user with driving or walking directions. The alert may also include any other information provided by the merchant or the Transaction Server 112 that may facilitate a purchase transaction, such as store hours, a store telephone number, an advertising display, a listing of additional products carried by the store that may be of interest to the user, and any other similar marketing or useful information.
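The proximity check of steps 1404 through 1408 referenced above amounts to a great-circle distance comparison against the stored merchant locations. A minimal sketch follows; the 500 meter threshold and the record layout are assumptions made for the example.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_merchants(current, stored, threshold_m=500):
    """Steps 1406-1408: return stored merchant records within the threshold."""
    lat, lon = current
    return [rec for rec in stored
            if distance_m(lat, lon, rec["lat"], rec["lon"]) <= threshold_m]
```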
[0088] Typical mobile devices 118 suitable for use with the various embodiments will have in common the components illustrated in FIG. 15. For example, an exemplary mobile device 118 may include a processor 1501 coupled to internal memory 1502, and a display 1503. Additionally, the mobile device 118 may have an antenna 1504 for sending and receiving electromagnetic radiation that is connected to a wireless data link and/or cellular telephone transceiver 1505 coupled to the processor 1501 and a mobile TV broadcast receiver 1508 coupled to the processor 1501. Mobile devices typically also include a key pad 1506 or miniature keyboard and menu selection buttons or rocker switches 1507 for receiving user inputs. Also, mobile devices typically include a speaker 1510 coupled to the processor 1501 for producing sound, and a microphone 1512 coupled to the processor 1501 for recording sound, such as a user's voice. [0089] In some mobile devices 118, global positioning system (GPS) receiver circuitry 1509 may be coupled to the processor 1501 and to the antenna 1504. In some implementations, the GPS receiver circuitry 1509 may be incorporated within a part of the wireless transceiver 1505 as illustrated. In other implementations, the GPS receiver circuitry may be a separate module coupled to the processor 1501. [0090] The embodiments described above may also be implemented on any of a variety of computing devices, such as a notebook computer 260 illustrated in FIG. 16. Such a notebook computer 260 typically includes a housing 266 that contains a processor 1601 coupled to volatile memory 1602 and a large capacity nonvolatile memory, such as a disk drive 1603. The computer 260 may also include a transceiver 1605 coupled to the processor 1601 that is configured to communicate with a network, such as the Internet. The transceiver 1605 may be a wireless transceiver configured to couple with a wireless communication network, such as a cellular data network or a wireless wide area network (e.g., WiFi). Alternatively or in addition, the transceiver 1605 may include modem circuitry for coupling to a wired network 1615, such as a connection to the Internet. For ease of reference, each of the alternative types of receiver, modem and transceiver that may be implemented within a computer 260 for receiving content from a communication network is referred to generally as a transceiver. The computer 260 may also include a mobile TV broadcast receiver 1610 coupled to the processor 1601 and to an antenna (not shown) for receiving mobile broadcast television signals. The computer 260 may also include a floppy disc drive 1604 and/or a compact disc (CD) drive 1605 coupled to the processor 1601. The computer housing 1606 typically also includes a touchpad 1607, a keyboard 1608 and a display 1609. The software instructions configuring the processor 1601 may be stored on any form of tangible processor-readable memory, including: a random access memory 1602, hard disc memory 1603, a floppy disk, a compact disc (readable in a compact disc drive 1604), electrically erasable/programmable read only memory (EEPROM), read only memory (such as FLASH memory), and/or a memory module (not shown) plugged into the computing device 260, such as an external memory chip or a USB-connectable external memory (e.g., a "flash drive") plugged into a USB network port. [0091] The embodiments described above may be implemented with any of a variety of general purpose computers or server devices, such as the server 1700 illustrated in FIG. 17. Such a server 1700 typically includes a processor 1701 coupled to volatile memory 1702 and a large capacity nonvolatile memory, such as a disk drive 1703. The server 1700 may also include a floppy disc drive and/or a compact disc (CD) drive 1706 coupled to the processor 1701. The server 1700 may also include network access ports 1704 coupled to the processor 1701 for communicating with a network 1705, such as the Internet. [0092] The processors in the various computing devices 1501, 1601, 1701 may be any programmable microprocessor, microcomputer or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of the various embodiments described herein. In some mobile devices, multiple processors 1501, 1601, 1701 may be provided, such as one processor dedicated to communication functions and one processor dedicated to running other applications. Typically, software applications may be stored in the internal memory 1502, 1602, 1702 before they are accessed and loaded into the processor 1501, 1601, 1701. In some mobile devices, the processor 1501 may include internal memory sufficient to store the application software instructions. In some computing devices, the memory may be in a separate memory chip coupled to the processor 1501, 1601, 1701. In many computing devices 118, 260, the internal memory 1502 may be a volatile or nonvolatile memory, such as flash memory, or a mixture of both. For the purposes of this description, a general reference to memory refers to all memory accessible by the processor 1501, 1601, 1701, including internal memory 1502, 1602, 1702, removable memory plugged into the computing device, and memory within the processor 1501, 1601, 1701 itself. [0093] The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the steps of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the steps in the foregoing embodiments may be performed in any order.
Words such as "thereafter," "then," "next," etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles "a," "an" or "the," is not to be construed as limiting the element to the singular. [0094] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention. [0095] The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some steps or methods may be performed by circuitry that is specific to a given function. [0096] In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection is properly termed a computer-readable medium.
For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a machine-readable medium and/or computer-readable medium, which may be incorporated into a computer program product. [0097] The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein. |
A ball grid array package device includes a substrate with a copper ball grid array pad formed on the substrate. A nickel layer may be formed on the copper pad and a tin layer formed on the nickel layer. The nickel layer may be formed using an electroless nickel plating process. The tin layer may be formed using an immersion tin process. In some cases, silver may be used instead of tin and formed using an immersion silver process. |
1. A ball grid array packaging device comprising:a substrate;a copper pad formed on the substrate;a nickel layer formed on the copper pad; anda tin layer formed on the nickel layer.
2. The device of claim 1, wherein said nickel layer is an electroless nickel layer.
3. The device of claim 1, wherein said tin layer is an immersion tin layer.
4. The device of claim 1, further comprising a solder resist film formed on said substrate on or around an edge of said copper pad.
5. The device of claim 1, wherein said nickel layer has a thickness of between about 5 microns and about 10 microns.
6. The device of claim 1, wherein said tin layer has a thickness of between about 1 micron and about 5 microns.
7. The device of claim 1, wherein said nickel layer inhibits intermetallic diffusion between said tin layer and said copper pad.
8. The device of claim 1, wherein said substrate comprises a buried oxide layer.
9. The device of claim 1, wherein said tin layer is bondable to a lead-free solder during use.
10. The device of claim 1, wherein said copper pad, said nickel layer and said tin layer are formed according to a structure of a CAD (Computer Aided Design) design.
11. A ball grid array packaging device comprising:a substrate;a copper pad formed on the substrate;a nickel layer formed on the copper pad; anda silver layer formed on the nickel layer.
12. The device of claim 11, wherein said nickel layer is an electroless nickel layer.
13. The device of claim 11, wherein said silver layer is an immersion silver layer.
14. The device of claim 11, wherein said nickel layer has a thickness of between about 5 microns and about 10 microns.
15. The device of claim 11, wherein said silver layer has a thickness of between about 1 micron and about 5 microns.
16. The device of claim 11, wherein said nickel layer inhibits intermetallic diffusion between said silver layer and said copper pad.
17. The device of claim 11, wherein said substrate comprises a buried oxide layer.
18. The device of claim 11, wherein said silver layer is bondable to a lead-free solder during use.
19. The device of claim 11, wherein said copper pad, said nickel layer, and said silver layer are formed according to a structure of a CAD (Computer Aided Design) design.
20. A ball grid array package manufacturing process, comprising:forming a copper ball grid array pad on a substrate;forming a solder resist film on the substrate around the copper pad;forming a nickel layer on the copper pad; andforming a tin layer on the nickel layer.
21. The process of claim 20, further comprising bonding a lead-free solder to said tin layer.
22. A ball grid array package, wherein at least one of the ball grid array pads comprises:a copper pad formed on a substrate;a nickel layer formed on the copper pad; anda silver layer formed on the nickel layer.
23. A computer readable storage medium storing a plurality of instructions that, when executed, produce a ball grid array package comprising:a copper pad formed on a substrate;a nickel layer formed on the copper pad; anda silver layer formed on the nickel layer.
24. A computer readable storage medium storing a plurality of instructions that, when executed, produce a process comprising:forming a copper ball grid array pad on a substrate;forming a solder resist film on the substrate around the copper pad;forming a nickel layer on the copper pad; andforming a tin layer on the nickel layer. |
Alternative Surface Treatment for Flip Chip Ball Grid Arrays

BACKGROUND OF THE INVENTION

Technical Field

This invention relates generally to structures used in ball grid array packages and, more particularly, to surface treatment of ball grid array pads.

Description of Related Art

Ball Grid Arrays (BGAs) are packages that are widely used to surface mount an integrated circuit (IC) to a printed circuit board (PCB). One variation of the BGA that can be used is the Flip Chip Ball Grid Array (FCBGA). The BGA package typically has a copper pad pattern on top of the IC substrate surrounded by the solder mask. Solder (e.g., solder balls) is placed on top of the copper pads. The BGA is then placed on a PCB with a matching copper pad pattern. The BGA/PCB assembly is then heated to melt the solder and allow the solder to flow into the pad pattern before cooling the assembly to resolidify the solder.

A key issue is the bonding of the solder to the copper pad. Copper does not bond easily to most lead-free solders. To overcome the bonding problem between lead-free solder and copper, a surface treatment is provided on the copper pad to promote adhesion between the pad and the solder. Existing industrial lead-free solder finishes include organic solderability preservatives (OSP), electroless nickel/immersion gold (ENIG), electroless nickel/electroless palladium/immersion gold (ENEPIG), immersion silver and immersion tin.

FIG. 1 depicts a cross-sectional view of a BGA pad 100 with an ENIG surface treatment. The pad 100 is a copper pad placed on the substrate 102 and surrounded by the solder resist film 104. For ENIG, an electroless nickel layer 106 is formed on the pad 100, followed by formation of an immersion gold layer 108 as a top layer. The thickness of the gold layer 108 is typically between about 2 microns and about 6 microns. Gold provides good electrical conductivity and surface protection. However, gold is a very expensive material, and gold may add significant cost in manufacturing BGA packages compared to materials such as tin or silver.

Replacing a portion of the gold with a less expensive material can reduce the cost of manufacturing a BGA package. FIG. 2 depicts a cross-sectional view of a BGA pad 100 having an ENEPIG surface treatment. For ENEPIG, an electroless palladium layer 110 is placed between the nickel layer 106 and the gold layer 108. The use of palladium allows the thickness of the gold layer 108 to be reduced to about 0.05 microns. However, palladium can still be more expensive than other conductive materials such as tin.

FIG. 3 depicts a cross-sectional view of a BGA pad 100 having an immersion tin surface treatment. An immersion tin layer 112 is formed on the copper pad 100. Tin for BGA packages may be less expensive than gold and/or palladium. The tin layer 112 can be formed on the copper pad 100 using a simple coating process. The tin layer 112 provides good surface protection to the copper pad 100. However, tin and copper may be susceptible to intermetallic growth. For example, copper may diffuse into the tin during subsequent processing such as electroplating. Intermetallic growth can degrade BGA packages over time and reduce package reliability.

Therefore, there is a need for a copper pad surface treatment that provides low cost and long-term reliability for bonding between copper pads and lead-free solder in a BGA package.
This surface treatment should also be easily fabricated and/or easily integrated into existing packaging technologies.

SUMMARY OF THE INVENTION

In some embodiments, a ball grid array packaging apparatus includes a substrate having a copper ball grid array pad formed thereon. A nickel layer may be formed on the copper pad, and a tin layer may be formed on the nickel layer. The nickel layer can be formed using an electroless nickel plating process. The tin layer can be formed using an immersion tin process.

In some embodiments, a ball grid array packaging apparatus includes a substrate having a copper ball grid array pad formed thereon. A nickel layer may be formed on the copper pad, and a silver layer may be formed on the nickel layer. The nickel layer can be formed using an electroless nickel plating process. The silver layer can be formed using an immersion silver process.

The nickel layer can be an intermetallic diffusion barrier between the copper pad and the tin or silver layer. A tin or silver layer allows the ball grid array package device to be bonded to lead-free solder. Lead-free solders can be used to bond ball grid array package devices to, for example, printed circuit boards or printed wiring boards. In some embodiments, palladium is formed between the nickel layer and the tin or silver layer.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 depicts a cross-sectional view of a BGA pad 100 having an electroless nickel/immersion gold (ENIG) surface treatment.

FIG. 2 depicts a cross-sectional view of a BGA pad 100 having an electroless nickel/electroless palladium/immersion gold (ENEPIG) surface treatment.

FIG. 3 depicts a cross-sectional view of a BGA pad 100 having an immersion tin surface treatment.

FIG. 4 depicts a cross-sectional view of an embodiment of a BGA pad 100 having a nickel/tin surface treatment.

FIG. 5 depicts a cross-sectional view of a BGA pad 100 having a nickel/silver surface treatment.

While the invention is described herein by way of several embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description are not intended to limit the invention to the particular form disclosed; on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the invention as defined by the appended claims. Any headings used herein are for organizational purposes only and are not intended to limit the scope of the description or the claims. The word "may" is used herein in a permissive sense (i.e., meaning having the potential to) rather than the mandatory sense (i.e., meaning must). Similarly, the words "include," "including," and "includes" mean including, but not limited to.

DETAILED DESCRIPTION

FIG. 4 depicts a cross-sectional view of an embodiment of a BGA pad 100 having a nickel/tin surface treatment. In some embodiments, pad 100 is a flip chip ball grid array (FCBGA) pad or a controlled collapse chip connection pad (C4 pad). Pad 100 is formed on substrate 102. In certain embodiments, the pad 100 is a copper pad. Substrate 102 may be, for example, a buried oxide layer substrate or other semiconductor device substrate. Solder mask 104 may be formed on substrate 102 on and around the edge of pad 100, as shown in FIG. 4.

In certain embodiments, a nickel layer 114 is formed (deposited) on the pad 100. In some embodiments, the nickel layer 114 is formed using an electroless nickel (EN) process (e.g., an autocatalytic nickel plating process) or another suitable nickel plating process.
After the nickel layer 114 is formed, a tin layer 112 may be formed on the nickel layer. In certain embodiments, the tin layer 112 is formed using an immersion tin (IT) process. Thus, the nickel layer 114 and the tin layer 112 can be formed using an electroless nickel/immersion tin (ENIT) process. In some embodiments, the tin layer 112 is formed using an electroless tin (ET) process. In such embodiments, the nickel layer 114 and the tin layer 112 can be formed by an electroless nickel/electroless tin plating (ENET) process.

The thickness of the nickel layer 114 can be selected based on factors such as, but not limited to, the thickness required to inhibit intermetallic diffusion between the copper of the pad 100 and the tin layer 112 and the thickness that provides suitable electrical and/or mechanical properties to the BGA package. For example, the nickel layer 114 can have the minimum thickness required to inhibit intermetallic diffusion between the copper of the pad 100 and the tin layer 112. At the same time, however, the nickel layer 114 may have a thickness that is not so large that the amount of nickel in the BGA package degrades the electrical and/or mechanical properties of the package. In certain embodiments, the nickel layer 114 has a thickness between about 5 microns and about 10 microns.

The tin layer 112 can have at least the minimum thickness that inhibits the tin layer from being consumed during assembly of the BGA package. Similar to the nickel layer 114, the tin layer 112 may have a thickness that is not so large as to potentially degrade the electrical and/or mechanical properties of the BGA package. In certain embodiments, the tin layer 112 has a thickness between about 1 micron and about 3 microns or between about 1 micron and about 5 microns.

In some embodiments, silver is used as the top layer for the surface treatment. FIG. 5 depicts a cross-sectional view of an embodiment of a BGA pad 100 having a nickel/silver surface treatment. A silver layer 116 is formed on the nickel layer 114 above the pad 100. In certain embodiments, the silver layer 116 is formed using an immersion silver (IS) process. Thus, the nickel layer 114 and the silver layer 116 can be formed by an electroless nickel/immersion silver (ENIS) process. In certain embodiments, the silver layer 116 is formed using an electroless silver plating (ES) process. In such embodiments, the nickel layer 114 and the silver layer 116 can be formed by an electroless nickel/electroless silver plating (ENES) process.

As with tin, the silver layer 116 can have at least the minimum thickness that inhibits the silver layer from being consumed during soldering of the BGA package. Additionally, the silver layer 116 may have a thickness that is not so large as to potentially degrade the electrical and/or mechanical properties of the BGA package. In certain embodiments, the silver layer 116 has a thickness between about 1 micron and about 5 microns.

For the embodiments depicted in FIGS. 4 and 5, the nickel layer 114 provides a barrier that minimizes the intermetallic diffusion between the tin layer 112 or the silver layer 116 and the copper pad 100. The use of nickel to provide an intermetallic diffusion barrier allows the use of tin or silver to produce a reliable, low cost BGA package. For example, the use of tin or silver can reduce the cost by between about 10% and about 20% compared to the use of gold or gold and palladium.
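The thickness windows above translate directly into a simple design rule check of the kind a CAD flow might apply. The following sketch is illustrative only; the ranges mirror the embodiments described above, and the function name and return values are assumptions.

```python
NICKEL_RANGE_UM = (5.0, 10.0)  # barrier thick enough to block Cu/Sn diffusion
TIN_RANGE_UM = (1.0, 5.0)      # thick enough to survive assembly
SILVER_RANGE_UM = (1.0, 5.0)

def check_pad_stack(nickel_um, top_um, top_metal="tin"):
    """Validate a pad stack-up against the example thickness windows."""
    lo, hi = NICKEL_RANGE_UM
    if not lo <= nickel_um <= hi:
        return f"nickel {nickel_um} um outside {lo}-{hi} um window"
    lo, hi = TIN_RANGE_UM if top_metal == "tin" else SILVER_RANGE_UM
    if not lo <= top_um <= hi:
        return f"{top_metal} {top_um} um outside {lo}-{hi} um window"
    return "stack OK"

print(check_pad_stack(7.5, 2.0))  # -> "stack OK"
```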
The use of the tin layer 112 or the silver layer 116 as the top layer of the surface treatment of pad 100 allows the use of flux and/or other methods to remove oxides and/or other contaminants. Removal of contaminants such as oxides prevents the contaminants from adversely affecting the soldering process or the bonding between the solder and the top layer of the surface treatment.

In certain embodiments, the use of the nickel layer 114 and the tin layer 112 or the silver layer 116 allows for a reduction in the thickness of the copper pad 100 while maintaining the desired electrical properties. Reducing the thickness of the copper pad 100 provides greater flexibility in the design of the BGA package and reduces the cost of manufacturing the package.

In some embodiments, a palladium layer can be placed between the nickel layer and the tin or silver layer. The palladium layer can be formed using, for example, an electroless palladium plating process.

The BGA pad and surface treatment embodiments depicted in FIGS. 4 and 5 can be used in integrated circuits such as, but not limited to, a graphics processing unit (GPU) and a central processing unit (CPU). In some embodiments, the BGA pads and surface treatments depicted in FIGS. 4 and 5 can be used in printed circuit boards (PCBs) or printed wiring boards (PWBs).

In certain embodiments, the BGA pad and surface treatment embodiments depicted in FIGS. 4 and 5 are structures of a CAD (Computer Aided Design) design or structures formed from CAD design processes. In certain embodiments, a computer readable storage medium stores a plurality of instructions that, when executed, produce an embodiment of the BGA pad and surface treatment depicted in FIGS. 4 and 5. For example, the instructions can provide the steps of a process for producing the BGA pad and surface treatment embodiments depicted in FIGS. 4 and 5.

Further modifications and alternative embodiments of various aspects of the invention will be apparent to those skilled in the art in view of this description. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the invention. It is to be understood that the forms of the invention shown and described herein are to be taken as the presently preferred embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed, and certain features of the invention may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description of the invention. Changes may be made in the elements described herein without departing from the spirit and scope of the invention as described in the appended claims. |
A method and apparatus for efficient and consistent validation/conflict detection in a Software Transactional Memory (STM) system is herein described. A version check barrier is inserted after a load to compare versions of loaded values before and after the load. In addition, a global timestamp (GTS) is utilized to track a latest committed transaction. Each transaction is associated with a local timestamp (LTS) initialized to the GTS value at the start of a transaction. As a transaction commits it updates the GTS to a new value and sets versions of modified locations to the new value. Pending transactions compare versions determined in read barriers to their LTS. If the version is greater than their LTS indicating another transaction has committed after the pending transaction started and initialized the LTS, then the pending transaction validates its read set to maintain efficient and consistent transactional execution. |
1. A tangible machine readable medium including instructions stored thereon, which when executed, causes the machine to perform the operations of: maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions; starting a new transaction; in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read; determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid in response to determining the most recent transaction timestamp has been updated since starting the new transaction; continuing execution of the new transaction in response to determining the most recent transaction timestamp has not been updated since starting the new transaction or determining the current read set is valid; and aborting the new transaction in response to determining the current read set is not valid.
2. The machine readable medium of claim 1, wherein in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read comprises: copying the most recent transaction timestamp to a local timestamp upon starting the new transaction; comparing the most recent transaction timestamp to the local timestamp in response to encountering the current read in the new transaction; and determining the most recent transaction timestamp has been updated in response to the comparing indicating the most recent transaction timestamp is different than the local timestamp.
3. The machine readable medium of claim 1, wherein maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions comprises: incrementing the most recent transaction timestamp upon commit of each of the plurality of transactions.
4. The machine readable medium of claim 1, wherein determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid comprises: determining if logged version values for the current read set are the same as current version values associated with locations read by the read set.
5. A method comprising: maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions; starting a new transaction; in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read; determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid in response to determining the most recent transaction timestamp has been updated since starting the new transaction; continuing execution of the new transaction in response to determining the most recent transaction timestamp has not been updated since starting the new transaction or determining the current read set is valid; and aborting the new transaction in response to determining the current read set is not valid.
6. The method of claim 5, wherein in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read comprises: copying the most recent transaction timestamp to a local timestamp upon starting the new transaction; comparing the most recent transaction timestamp to the local timestamp in response to encountering the current read in the new transaction; and determining the most recent transaction timestamp has been updated in response to the comparing indicating the most recent transaction timestamp is different than the local timestamp.
7. The method of claim 5, wherein maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions comprises: incrementing the most recent transaction timestamp upon commit of each of the plurality of transactions.
8. The method of claim 5, wherein determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid comprises: determining if logged version values for the current read set are the same as current version values associated with locations read by the read set.
9. A system comprising: a processor adapted to execute code; and a memory to hold the code, the code, when executed by the processor, adapted to perform the operations of: maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions; starting a new transaction; in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read; determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid in response to determining the most recent transaction timestamp has been updated since starting the new transaction; continuing execution of the new transaction in response to determining the most recent transaction timestamp has not been updated since starting the new transaction or determining the current read set is valid; and aborting the new transaction in response to determining the current read set is not valid.
10. The system of claim 9, wherein in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read comprises: copying the most recent transaction timestamp to a local timestamp upon starting the new transaction; comparing the most recent transaction timestamp to the local timestamp in response to encountering the current read in the new transaction; and determining the most recent transaction timestamp has been updated in response to the comparing indicating the most recent transaction timestamp is different than the local timestamp.
11. The system of claim 9, wherein maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions comprises: incrementing the most recent transaction timestamp upon commit of each of the plurality of transactions.
12. The system of claim 9, wherein determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid comprises: determining if logged version values for the current read set are the same as current version values associated with locations read by the read set.
13. A tangible machine readable medium including instructions stored thereon, which when executed, causes the machine to perform the operations of: detecting a new transaction in program code and inserting code, which when executed, causes a machine to perform the operations of: maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions; starting a new transaction; in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read; determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid in response to determining the most recent transaction timestamp has been updated since starting the new transaction; continuing execution of the new transaction in response to determining the most recent transaction timestamp has not been updated since starting the new transaction or determining the current read set is valid; and aborting the new transaction in response to determining the current read set is not valid.
14. The machine readable medium of claim 13, wherein in response to encountering a current read in the new transaction, determining if the most recent transaction timestamp has been updated from starting the new transaction to the current read comprises: copying the most recent transaction timestamp to a local timestamp upon starting the new transaction; comparing the most recent transaction timestamp to the local timestamp in response to encountering the current read in the new transaction; and determining the most recent transaction timestamp has been updated in response to the comparing indicating the most recent transaction timestamp is different than the local timestamp.
15. The machine readable medium of claim 13, wherein maintaining a most recent transaction timestamp to be updated upon commit of each of a plurality of transactions comprises: incrementing the most recent transaction timestamp upon commit of each of the plurality of transactions, and wherein determining if a current read set for the new transaction including reads encountered from starting the new transaction to the current read are valid comprises: determining if logged version values for the current read set are the same as current version values associated with locations read by the read set. |
FIELD This invention relates to the field of processor execution and, in particular, to execution of groups of instructions. BACKGROUND Advances in semiconductor processing and logic design have permitted an increase in the amount of logic that may be present on integrated circuit devices. As a result, computer system configurations have evolved from a single or multiple integrated circuits in a system to multiple cores and multiple logical processors present on individual integrated circuits. A processor or integrated circuit typically comprises a single processor die, where the processor die may include any number of cores or logical processors. The ever-increasing number of cores and logical processors on integrated circuits enables more software threads to be executed. However, the increase in the number of software threads that may be executed simultaneously has created problems with synchronizing data shared among the software threads. One common solution to accessing shared data in multiple core or multiple logical processor systems comprises the use of locks to guarantee mutual exclusion across multiple accesses to shared data. However, the ever-increasing ability to execute multiple software threads potentially results in false contention and a serialization of execution. For example, consider a hash table holding shared data. With a lock system, a programmer may lock the entire hash table, allowing one thread to access the entire hash table. However, throughput and performance of other threads is potentially adversely affected, as they are unable to access any entries in the hash table until the lock is released. Alternatively, each entry in the hash table may be locked. However, this increases programming complexity, as programmers have to account for more locks within a hash table. Another data synchronization technique includes the use of transactional memory (TM). Often transactional execution includes speculatively executing a grouping of a plurality of micro-operations, operations, or instructions. In the example above, both threads execute within the hash table, and their accesses are monitored/tracked. If both threads access/alter the same entry, one of the transactions may be aborted to resolve the conflict. One type of transactional execution includes a Software Transactional Memory (STM), where tracking of accesses, conflict resolution, abort tasks, and other transactional tasks are performed in software. In one implementation, versions of read operations are tracked to maintain consistency and detect conflicts. However, in a typical STM, validation of the read operations is not done until a transaction is to be committed. Therefore, if an invalidating action, such as a conflict, occurs during a transaction, some data may become inconsistent, and the use of the inconsistent data may lead to a program exception or infinite looping. Furthermore, execution cycles are potentially wasted in executing the rest of the transaction only to discover that an inconsistency occurred.
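The remedy developed in the description below checks a single global timestamp at each read so that the read set is validated on demand, at the moment another transaction is seen to have committed, instead of only at commit. A minimal single-threaded Python sketch of that scheme follows; all identifiers are illustrative, and locking, write buffering, and concurrency control are deliberately elided.

```python
global_ts = 0   # global timestamp (GTS): bumped at every commit
versions = {}   # address -> GTS value of the last commit that wrote it
memory = {}     # address -> committed value

class Transaction:
    def __init__(self):
        self.lts = global_ts       # local timestamp (LTS) := GTS at start
        self.read_set = {}         # address -> version observed at read

    def read(self, addr):
        if global_ts != self.lts:  # some transaction committed since LTS
            self.validate()        # on-demand validation of prior reads
            self.lts = global_ts
        self.read_set[addr] = versions.get(addr, 0)
        return memory.get(addr)

    def validate(self):
        for addr, seen in self.read_set.items():
            if versions.get(addr, 0) != seen:
                raise RuntimeError("abort: read set no longer valid")

    def commit(self, writes):
        global global_ts
        self.validate()
        global_ts += 1                    # publish a new GTS value
        for addr, value in writes.items():
            memory[addr] = value
            versions[addr] = global_ts    # stamp every modified location
```

In this sketch the fast path of a read is a single comparison of the global and local timestamps; the full read-set walk runs only when that comparison fails, which matches the goal of minimizing validation work.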
BRIEF DESCRIPTION OF THE DRAWINGS The present invention is illustrated by way of example and is not intended to be limited by the figures of the accompanying drawings. Figure 1 illustrates an embodiment of a system capable of transactional execution. Figure 2 illustrates an embodiment of a Software Transactional Memory (STM) system. Figure 3 illustrates an embodiment of utilizing a global timestamp in an STM to detect conflicts in an exemplary transaction. Figure 4a illustrates an embodiment of a flow diagram for a method of efficient on demand transactional validation. Figure 4b illustrates an embodiment of a continued flow from Figure 4a. Figure 4c illustrates an embodiment of a continued flow from Figure 4a and Figure 4b. Figure 5 illustrates an embodiment of a flow diagram for a method of inserting instructions in program code to perform efficient on demand transactional execution. DETAILED DESCRIPTION In the following description, numerous specific details are set forth, such as examples of specific hardware support for transactional execution, specific tracking/meta-data methods, specific types of local memory in processors, and specific types of memory accesses and locations, etc., in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention. In other instances, well known components or methods, such as coding of transactions in software, demarcation of transactions, specific multi-core and multi-threaded processor architectures, interrupt generation/handling, cache organizations, and specific operational details of microprocessors, have not been described in detail in order to avoid unnecessarily obscuring the present invention. A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. However, other representations of values in computer systems have been used. For example, the decimal number 10 may also be represented as the binary value 1010 or the hexadecimal letter A. Moreover, states may be represented by values or portions of values. As an example, a locked state may be represented by a first value in a location, such as an odd number, while a version number, such as an even value, in the location represents an unlocked state. Here, a portion of the first and second values may be used to represent the states, such as the two lower bits of the values, a sign bit associated with the values, or another portion of the values. The method and apparatus described herein are for efficient and consistent validation in a software transactional memory (STM) system. Specifically, efficient and consistent validation in a software transactional memory (STM) system is primarily discussed in reference to multi-core processor computer systems.
However, the methods and apparatus for efficient and consistent validation in a software transactional memory (STM) system are not so limited, as they may be implemented on or in association with any integrated circuit device or system, such as cell phones, personal digital assistants, embedded controllers, mobile platforms, desktop platforms, and server platforms, as well as in conjunction with other resources, such as hardware/software threads, that utilize transactional memory. Referring to Figure 1, an embodiment of a processor capable of efficient and consistent validation in a software transactional memory (STM) system is illustrated. In one embodiment, processor 100 is a multi-core processor capable of executing multiple threads in parallel. However, processor 100 may include any processing element, such as an embedded processor, cell-processor, microprocessor, or other known processor, which is capable of executing one thread or multiple threads. As an illustrative example, a simplified embodiment of an out-of-order architecture for a processor is illustrated in Figure 1. The modules shown in processor 100, which are discussed in more detail below, are potentially implemented in hardware, software, firmware, or a combination thereof. Note that the illustrated modules are logical blocks, which may physically overlap the boundaries of other modules, and may be configured or interconnected in any manner. In addition, the modules as shown in Figure 1 are not required in processor 100. Furthermore, other modules, units, and known processor features may also be included in processor 100. Bus interface module 105 is to communicate with a device, such as system memory 175, a chipset, a north bridge, or other integrated circuit. Typically, bus interface module 105 includes input/output (I/O) buffers to transmit and receive bus signals on interconnect 170. Examples of interconnect 170 include a Gunning Transceiver Logic (GTL) bus, a GTL+ bus, a double data rate (DDR) bus, a pumped bus, a differential bus, a cache coherent bus, a point-to-point bus, a multi-drop bus, or other known interconnect implementing any known bus protocol. Processor 100 is coupled to memory 175, which may be dedicated to processor 100 or shared with other devices in a system. Examples of memory 175 include dynamic random access memory (DRAM), static RAM (SRAM), non-volatile memory (NV memory), and long-term storage. Bus interface unit 105 as shown is also to communicate with higher level cache 110. Higher-level cache 110 is to cache recently fetched and/or operated-on elements. In one embodiment, higher-level cache 110 is a second-level data cache. However, higher level cache 110 is not so limited, as it may be or include instruction cache 115 to store recently fetched/decoded instructions. Instruction cache 115, which may also be referred to as a trace cache, is illustrated before fetch logic 120 and decode logic 125. Here, instruction cache 115 stores recently fetched instructions that have not been decoded. Yet, instruction cache 115 is potentially placed after fetch logic 120 and/or after decode logic 125 to store decoded instructions. Fetch logic 120 is to fetch data/instructions to be operated on/executed. Although not shown, in one embodiment, fetch logic includes or is associated with branch prediction logic, a branch target buffer, and/or a prefetcher to predict branches to be executed/taken and pre-fetch instructions along a predicted branch for execution.
Here, a processor capable of speculative execution potentially prefetches and speculatively executes predicted branches. Decode logic 125 is coupled to fetch logic 120 to decode fetched elements.

Allocator and renamer module 150 includes an allocator to reserve resources, such as register files to store instruction processing results and a reorder buffer to track instructions. Unit 150 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 100. Reorder/retirement module 155 includes components, such as the reorder buffers mentioned above, to support out-of-order execution and later retirement of instructions executed out-of-order. In one embodiment, where processor 100 is an in-order execution processor, reorder/retirement module 155 may not be included.

Scheduler and execution module 160, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.

Also shown in Figure 1 is lower level data cache 165. Data cache 165 is to store recently used/operated on elements, such as data operands. In one embodiment, a data translation lookaside buffer (DTLB) is associated with lower level data cache 165. Often a processor logically views physical memory as a virtual memory space. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages. Here, a DTLB supports translation of virtual to linear/physical addresses. Data cache 165 may be utilized as a transactional memory or other memory to track tentative accesses during execution of a transaction, as discussed in more detail below.

In one embodiment, processor 100 is a multi-core processor. A core often refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In one embodiment, execution resources, such as execution module 160, include physically separate execution units dedicated to each core. However, execution module 160 may include execution units that are physically arranged as part of the same unit or in close proximity; yet, portions of execution module 160 are logically dedicated to each core. Furthermore, each core may share access to processor resources, such as higher level cache 110.

In another embodiment, processor 100 includes a plurality of hardware threads. A hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to some execution resources. For example, smaller resources, such as instruction pointers, renaming logic in rename allocator logic 150, and an instruction translation lookaside buffer (ITLB), may be replicated for each hardware thread, while resources, such as reorder buffers in reorder/retirement unit 155, load/store buffers, and queues, may be shared by hardware threads through partitioning.
Other resources, such as low-level data cache and data-TLB 165, execution unit(s) 160, and parts of out-of-order unit 155, are potentially fully shared.

As can be seen, as certain processing resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and a core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, with each logical processor being capable of executing a thread. Logical processors may also be referred to herein as resources or processing resources. Therefore, a processor, such as processor 100, is capable of executing multiple threads on multiple logical processors/resources. Consequently, multiple transactions may be simultaneously and/or concurrently executed in processor 100.

A transaction includes a grouping of instructions, operations, or micro-operations, which may be grouped by hardware, software, firmware, or a combination thereof. For example, instructions may be used to demarcate a transaction. Typically, during execution of a transaction, updates to memory are not made globally visible until the transaction is committed. While the transaction is still pending, locations loaded from and written to within a memory are tracked. Upon successful validation of those memory locations, the transaction is committed and updates made during the transaction are made globally visible. However, if the transaction is invalidated during its pendency, the transaction is restarted without making the updates globally visible. As a result, pendency of a transaction, as used herein, refers to a transaction that has begun execution and has not been committed or aborted, i.e. is pending. Two example systems for transactional execution include a Hardware Transactional Memory (HTM) system and a Software Transactional Memory (STM) system.

A Hardware Transactional Memory (HTM) system often refers to tracking accesses during execution of a transaction with processor 100 in hardware of processor 100. For example, a cache line 166 is to cache data item/object 176 in system memory 175. During execution of a transaction, an annotation/attribute field 167, which is associated with cache line 166, is utilized to track accesses to and from line 166. For example, attribute field 167 includes a transaction read bit to track if cache line 166 has been read during execution of a transaction and a transaction write bit to track if cache line 166 has been written to during execution of the transaction.

Attribute field 167 is potentially used to track accesses and detect conflicts during execution of a transaction, as well as upon attempting to commit the transaction. For example, if a transaction read bit in field 167 is set to indicate a read from line 166 occurred during execution of a transaction and a store associated with line 166 from another transaction occurs, a conflict is detected. Examples of utilizing an attribute field for transactional execution are included in co-pending application with serial number _ and attorney docket number 042390.P20165 entitled, "Transaction based shared data operations in a Multiprocessor Environment."
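As a purely illustrative sketch, the per-line attribute tracking described above can be modeled in C++ as follows. This models only the bookkeeping, not actual cache hardware; the rule that a remote load conflicts only with a locally written line is a common convention assumed here for illustration, not a detail recited above.

#include <cstdint>

// Illustrative software model of attribute field 167: one transaction
// read bit and one transaction write bit per cache line.
struct LineAttributes {
    bool tx_read  = false;  // set when the line is read within a transaction
    bool tx_write = false;  // set when the line is written within a transaction
};

// A store from another transaction conflicts if this transaction
// has read or written the line (as in the example above).
bool conflicts_with_remote_store(const LineAttributes& a) {
    return a.tx_read || a.tx_write;
}

// A load from another transaction conflicts only if this transaction
// has written the line (assumed convention).
bool conflicts_with_remote_load(const LineAttributes& a) {
    return a.tx_write;
}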
A Software Transactional Memory (STM) system often refers to performing access tracking, conflict resolution, or other transactional memory tasks in software. As a general example, compiler 179 in system memory 175, when executed by processor 100, compiles program code to insert read and write barriers around load and store operations, respectively, which are part of transactions within the program code. Compiler 179 may also insert other transaction related operations, such as commit or abort operations.

As shown, cache 165 is still to cache data object 176, as well as meta-data 177 and transaction descriptor 178. However, meta-data location 177 is associated with data item 176 to indicate if data item 176 is locked. A read log, which may be present in transaction descriptor 178, is used to log read operations, while a write buffer or other transactional memory, which may include lower-level data cache 165, is used to buffer or log write operations. Inserted calls for validation and commit utilize the logs to detect conflicts and validate transaction operations.
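As a minimal sketch of the barrier insertion just described, a transactional load might be transformed roughly as follows. The function names stm_read_barrier and stm_version_check are hypothetical placeholders standing in for the inserted calls; they are not part of any particular STM runtime.

#include <cstdint>

// Hypothetical barrier interface, assumed provided by an STM runtime.
uint64_t stm_read_barrier(int* addr);                        // pre-read barrier
void stm_version_check(int* addr, uint64_t logged_version);  // post-read barrier

int shared_x;  // a shared location accessed within a transaction

int transactional_read() {
    // The original program code was simply: int tmp = shared_x;
    // A compiler such as compiler 179 might instrument it as:
    uint64_t v = stm_read_barrier(&shared_x);  // check lock/version and log the read
    int tmp = shared_x;                        // the original load
    stm_version_check(&shared_x, v);           // abort if the version changed
    return tmp;
}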
Referring to Figure 2, an embodiment of a Software Transactional Memory (STM) system is illustrated. Data object 201 includes any granularity of data, such as a bit, a word, a line of memory, a cache line, a table, a hash table, or any other known data structure or object. For example, a programming language defined data object is data object 201. Transactional memory 205 includes any memory to store elements associated with transactions. Here, transactional memory 205 comprises a plurality of lines 210, 215, 220, 225, and 230. In one embodiment, memory 205 is a cache memory. As an example, data object 201 is to be stored aligned in cache line 215. Alternatively, data object 201 is capable of being stored unaligned in memory 205. In one example, each data object is associated with a meta-data location in array of meta-data 240. As an illustrative embodiment, an address associated with cache line 215 is hashed to index array 240, which associates meta-data location 250 with cache line 215 and data object 201. Note that data object 201 may be the same size as, smaller than (multiple elements per line of cache), or larger than (one element per multiple lines of cache) cache line 215. In addition, meta-data location 250 may be associated with data object 201 and/or cache line 215 in any manner.

Usually, meta-data location 250 represents whether data object 201 is locked or available. In one embodiment, when data object 201 is locked, meta-data location 250 includes a first value to represent a locked state, such as read/write owned state 252. Another exemplary lock state is a Single Owner Read Lock (SORL) state, which is discussed in more detail in co-pending related application entitled, "A mechanism for Irrevocable Transactions," with a serial number _ and attorney docket number 042390.P24817. Yet, any lock or lock state may be utilized and represented in meta-data location 250. When unlocked, or available, meta-data location 250 includes a second value. In one embodiment, the second value is to represent version number 251. Here, version number 251 is updated, such as incremented, upon a write to data object 201, to track a current version of data object 201.

As an example to illustrate operation of the embodiment shown in Figure 2, in response to a first read operation in a transaction referencing data object 201/cache line 215, the read is logged in read log 265. In one embodiment, read log 265 is included in transaction descriptor 260. Transaction descriptor 260 may also include write space 270, as well as other information associated with a transaction, such as transaction identifier (ID) 261, irrevocable transaction (IRT) indicator 262, and other transaction information. However, write space 270 and read log 265 are not required to be included in transaction descriptor 260. For example, write space 270 may be separately included in a different memory space from read log 265 and/or transaction descriptor 260. Irrevocable transactions and transaction descriptors are discussed in more detail in co-pending related application entitled, "A mechanism for Irrevocable Transactions," with a serial number _ and attorney docket number 042390.P24817.

In one embodiment, logging a read includes storing version number 251 and an address associated with data object 201 or cache line 215 in read log 265. Here, assume version number 251 is one to simplify the example. Upon encountering a write referencing an address associated with data object 201, the write is potentially logged or tracked as a tentative update. In addition, the meta-data location is updated to a lock value, such as two, to represent that data object 201 is locked by the transaction or resource executing the transaction. In one embodiment, the lock value is updated utilizing an atomic operation, such as a read, modify, and write (RMW) instruction. Examples of RMW instructions include Bit-test and Set, Compare and Swap, and Add.
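As a minimal sketch of the atomic lock update described above, and assuming the encoding from this example in which an odd meta-data value (such as one) is an unlocked version and an even value (such as two) marks the location as locked, a compare-and-swap based acquisition and an incremented-version release might look as follows in C++. All names are illustrative assumptions:

#include <atomic>
#include <cstdint>

// Sketch only: odd meta-data value = unlocked version, even = locked,
// matching the 1 -> 2 (locked) -> 3 (new version) example above.
bool try_acquire_write_lock(std::atomic<uint64_t>& meta, uint64_t& seen_version) {
    uint64_t v = meta.load(std::memory_order_acquire);
    if ((v & 1) == 0)
        return false;                 // even value: locked by another transaction
    seen_version = v;                 // remember the unlocked version we observed
    // Atomic read-modify-write: install the even lock value only if the
    // meta-data still holds the version we observed (compare-and-swap).
    return meta.compare_exchange_strong(v, v + 1, std::memory_order_acq_rel);
}

void release_write_lock(std::atomic<uint64_t>& meta, uint64_t seen_version) {
    // On commit, publish the next unlocked (odd) version, e.g. 1 -> 3, so
    // readers that logged the old version can detect the update.
    meta.store(seen_version + 2, std::memory_order_release);
}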
In one embodiment, write space 270 is a buffer that buffers/stores the new value to be written to data object 201. Here, in response to a commit, the new values are "written-back" to their corresponding locations, while in response to an abort the new values in write space 270 are discarded. In another embodiment, the write updates cache line 215 with a new value, and an old value 272 is stored in write space 270. Here, upon committing the transaction, the old values in the write space are discarded, and conversely, upon aborting the transaction, the old values are restored, i.e. the locations are "rolled-back" to their original values before the transaction. Examples of write space 270 include a write log, a group of checkpointing registers, and a storage space to log/checkpoint values to be updated during a transaction. More information on efficient checkpointing and roll-back for transactions is discussed in co-pending related application entitled, "Compiler Technique for Efficient Register Checkpointing to Support Transaction Roll-back," with serial number _ and attorney docket number 042390.P24802.

Continuing the example from above, whether write space 270 is utilized as a write-buffer, a write-log, or not at all, the write, when committed, releases lock 250. In one embodiment, releasing lock 250 includes returning meta-data location 250 to a value of one to represent an unlocked state. Alternatively, the value is incremented to represent unlocked version value 251 of three. This versioning allows other transactions that loaded data object 201 to validate their reads by comparing their logged version values in their read logs to current version value 251.

The example above includes one embodiment of implementing an STM; however, any known implementation of an STM may be used. STMs are discussed in the following articles: "Implementing a High Performance Software Transactional Memory for a Multi-core Runtime" by Bratin Saha, Ali-Reza Adl-Tabatabai, Rick Hudson, Chi Cao Minh, and Ben Hertzberg, Proceedings of the eleventh ACM SIGPLAN symposium on Principles and practice of parallel programming; "Software Transactional Memory" by N. Shavit and D. Touitou, Proceedings of the Fourteenth ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing; "Language Support for Lightweight Transactions" by T.L. Harris and K. Fraser, Proceedings of the 2003 ACM SIGPLAN Conference on Object-Oriented Programming Systems, Languages and Applications; and "Compiler and runtime support for efficient software transactional memory" by Ali-Reza Adl-Tabatabai, Brian Lewis, Vijay Menon, Brian Murphy, Bratin Saha, and Tatiana Shpeisman, Proceedings of the 2006 ACM SIGPLAN conference on Programming language design and implementation.

In fact, any known system for performing transactional memory may also be used, such as an HTM, an STM, an Unbounded Transactional Memory (UTM) system, or a hybrid Transactional Memory system, such as a hardware accelerated STM (HASTM), or any other transactional memory system. Co-pending and related application entitled, "Hardware Acceleration of a write-buffering software transactional memory," with serial number _ and attorney docket number P24805, discusses hardware acceleration of an STM. Co-pending application entitled, "Overflow Method for Virtualized Transactional Memory," with serial number _ and attorney docket number 042390.P23547, discusses extending/virtualizing an HTM.

Previously, when version 251 was updated by a remote resource, such as a resource not executing the current transaction, the updated version 251 indicated that a write to line 215 by the remote resource occurred during execution of the current transaction. As a result, a previous load from line 215 in the current transaction may become invalid; yet, the invalid previous load is not detected from version 251 until an attempt to commit the current transaction, when the read set of the current transaction is validated.

Therefore, in one embodiment, an efficient and consistent STM capable of on demand validation includes operations to check version 251 before and after a load/read is performed. For example, when a read/load of data object 201 from line 215 in a first transaction is detected, a call to a read barrier is inserted before the load. In one embodiment, the read barrier is to check meta-data location 250. As discussed above, if meta-data 250 includes a first unlocked value in version 251, then the first unlocked version is logged in read log 265. If meta-data 250 includes a locked value, then in one embodiment, the current transaction waits until meta-data 250 is set to an unlocked value.

Now assume the load is performed, and then a remote resource updates data object 201, modifying version 251 first to a locked value and then to a second unlocked value. As a result, the load has become invalid, i.e. a remote resource wrote to a location loaded by the first pending transaction, which is represented by the logged first unlocked version value being different from the current/subsequent second unlocked version value. Here, a check version barrier is also inserted after the load in the first transaction. The check version barrier is to get the current/subsequent version and compare it to the logged version in read log 265.
Consequently, if the remote resource update occurs after the read and before the check version barrier is called, then the check version barrier detects the change in version 251 and is able to abort the transaction at that point, instead of waiting until transaction commit to detect the invalid load in a read set logged in read log 265. Yet, in the example above, if the invalidating write by the remote resource occurs before the read or after the version check barrier, then the invalid load may potentially go undetected until an attempt at transaction commit.

Therefore, in one embodiment, a global timestamp is utilized to track the versions of the latest/most recent committed transaction. As an example, a first value in the global timestamp is copied into a local timestamp for a first transaction. In this case, read barriers before loads still read version 251. In addition, version 251 is compared with the local timestamp. If version 251 is greater than the local timestamp, the current read set of the first transaction is validated. Essentially, version 251 being greater than the local timestamp copied from the global timestamp at the start of the transaction potentially indicates that another transaction or remote resource updated line 215.

To illustrate, assume a global timestamp is initialized to zero. A first and a second transaction start, copying the zero into their local timestamps. The first transaction loads from line 215 and the second transaction writes to line 215. Then, the second transaction commits. Here, during commit, the second transaction increments the global timestamp to one and uses the global timestamp value of one for the versions, such as version 251, associated with line 215 updated during the second transaction. Now assume the first transaction is to perform another load from line 215. When the read barrier is executed, a version of one is read from meta-data 250, which is greater than the local timestamp of the first transaction, which holds a value of zero. Therefore, the previous load in the first transaction is validated, i.e. the logged version value of zero is compared to the current version value of one. As the versions are different, indicating a conflict, the first transaction is potentially aborted at this point in the transaction.

Turning to Figure 3, an embodiment of executing an exemplary transaction in an STM capable of utilizing inserted version check barriers and timestamps is illustrated. Here, memory locations 305 and 310 are to store elements, such as data elements, instructions, or data objects. As discussed in reference to Figure 2, memory locations 305 and 310 are associated with meta-data (MD) locations 306 and 311, respectively, in any manner. For example, memory locations 305 and 310 are cache lines to store data objects, and addresses referencing the data objects or cache lines 305 and 310 are hashed to index to MD locations 306 and 311, respectively.

In addition, Global TimeStamp (GTS) 315 is to store a GTS value. In one embodiment, the GTS is a variable stored in any known manner, such as on a program stack, in memory, in registers, or in other storage elements. Resources 301 and 302 include cores and/or threads, which are capable of concurrent or simultaneous execution. For example, resources 301 and 302 may be cores on a single physical microprocessor die to execute transactions 303 and 304, at least in part, concurrently.
Local TimeStamps (LTS') 316 and 317 are variables, such as local variables initialized in transactions 303 and 304, respectively. To illustrate the exemplary operation, at the start of execution of transaction 303, LTS 316 is loaded with GTS 315 in operation 320, to copy the initial GTS value of zero into LTS 316. Similarly, at the start of execution of transaction 304, in operation 330, GTS 315, still having a value of zero, is loaded into LTS 317, which is associated with transaction 304.

First, write barrier 350, inserted before store operation 321, is to perform a write barrier operation, which includes any transaction task associated with a write/store. For example, write barrier 350 is to acquire a lock associated with a referenced location. Here, a lock for location 305 is acquired. Then, store 321 is performed. As stated above, store 321 may update location 305 and log an old value of location 305, or store 321 may buffer a new value in a write buffer and retain the old value in location 305.

Next, in transaction 304, inserted read barrier 355 is encountered before load 331 from location 310. Here, the meta-data location associated with location 310 is checked. Meta-data location 311 indicates an unlocked value of zero. The unlocked value of zero is compared to LTS 317, which also has a value of zero. As LTS 317 and meta-data location 311 hold the same value of zero, no read set validation is performed, and the version zero is logged as a logged version. Load 331 from location 310 is performed. After load 331, version check barrier 360 checks meta-data 311 for a second, current, or subsequent version. If the second version is different from the first version, a modification to memory location 310 has potentially occurred and transaction 304 is potentially aborted. Here, the version at version check barrier 360 is the same as the version logged at read barrier 355, so execution of transaction 304 continues.

Next, in transaction 303, write barrier 350 is encountered. A lock for location 310 is acquired. Store 322 is performed to store a value of 99 in location 310. Transaction 303 then commits. During commit, GTS 315 is incremented, such as from a value of zero to one. In a write-buffering STM, the new values are written back to locations 305 and 310. In a roll-back STM, the old logged values are discarded. In addition, incremented GTS 315 of one is utilized as the version for writes performed during transaction 303. Therefore, meta-data locations 306 and 311 are updated to a version of one, which is the value of GTS 315.

As transaction 304 is still executing on resource 302, read barrier 355 is encountered. Here, meta-data location 306 is checked, as it is associated with location 305, which is to be loaded from by operation 332. Meta-data location 306, which was updated by remote resource 301 to one, is greater than local timestamp 317 of zero. Therefore, in one embodiment, local timestamp 317 is loaded or re-loaded with the current value in GTS 315 of one. In addition, the read set of transaction 304 is validated. Here, the logged value of meta-data 311 at read barrier 355 before operation 331 is a zero, and the current value of meta-data 311 is one. As a result, it is determined that load 331, i.e.
the read set of transaction 304, is invalid, and transaction 304 may be aborted and re-executed at this point instead of wasting execution cycles executing to an attempted commit.

Turning to Figure 4a, an embodiment of a flow diagram for a method of efficiently and consistently executing a transaction in an STM is illustrated. In flow 405, execution of a transaction is started. In one embodiment, a compiled start transaction instruction is executed to start execution of the transaction. As an example, a call to a start transaction function is inserted by the compiler, which, when executed, performs initialization and other tasks, such as the task in flow 410. In flow 410, a first global timestamp (GTS) value is stored in a first local timestamp (LTS) associated with a first transaction, in response to starting execution of the first transaction. Examples of operations to store the first GTS in the first LTS include a copy to copy the first GTS to the first LTS, a load to read the GTS, and a store to store the first GTS in the first LTS.

Next, in flow 415, a read operation, which is included in the first transaction and references a first address and/or a first data object, is encountered in the first transaction. Note that a compiler may insert a read barrier before the read operation. As a result, encountering a read operation includes encountering a read barrier associated with a read operation. As an example, a call to a read barrier function is inserted before the read operation to perform flows 420, 421, 425, 426, 427, 428, 430, and 435. In flow 420, it is determined if the first address is unlocked. In one embodiment, a meta-data location associated with the data object/address is checked. In flow 421, if the address is not unlocked, i.e. the meta-data location represents a locked value, then execution waits in flow 421 until the address is unlocked. Here, if waiting leads to a deadlock, the transaction may be aborted.

After the address is unlocked, it is determined in flow 425 if a first version in the meta-data location is greater than the LTS. In response to the first version being greater than the LTS, the local timestamp is reloaded with a current GTS in flow 426. Furthermore, in flow 427, a plurality of previous read operations in the first transaction, such as a read set of/associated with the first transaction, are validated. If the read set is determined not to be valid in flow 427, then the first transaction is aborted in flow 428. Whether the first version is not greater than the LTS in flow 425, or it is determined that the read set is valid in flow 427, the read operation, i.e. the first/current version, is logged in flow 430. Next, in flow 435, the read operation is performed. After performing the current read operation, execution flows through 436 to flow 460 in Figure 4c. Here, a subsequent/second version associated with the first address is checked/determined in flow 460. If the first and second versions, i.e. the logged current version and the subsequent version, or the version logged before the read and the version determined after the read, are not the same, then in flow 428 the first transaction is aborted.

Turning back to Figure 4b, after initializing the first transaction in flows 405 and 410 from Figure 4a, a write/store operation, which references the address or data object, may be encountered at any time within the first transaction, as represented by the execution flow through 411 from Figure 4a to Figure 4b.
Similar to a read/load operation, a write barrier is potentially inserted before the write operation to perform write barrier tasks, such as in flows 445 and 446. Here, in flow 445, it is determined if a lock for the address/data object is already acquired. If a lock is already acquired, the write may be directly performed in flow 447. However, if no lock has been acquired, then a lock is acquired in flow 446. Note that in an alternate implementation of an STM, write locks may be acquired during commit instead of during execution of a transaction as illustrated here.

Next, whether through flow 450 from Figure 4b to 4c, i.e. after a write operation, or after flow 460, i.e. after a read operation and version check barrier, a commit of the transaction is encountered. In one embodiment, a commit transaction instruction, such as a call to a commit function, is encountered to perform commit operations, such as the tasks in flows 470, 475, 480, and 485. In flow 470, it is determined if the LTS is less than the GTS. In response to the LTS being less than the GTS, in flow 475, each read operation in the first transaction, including the plurality of previous read operations from flow 427 and the current read operation logged in flow 430, is validated. In response to the read set not being valid, the first transaction is aborted in flow 428. However, if the read set is valid, or in flow 470 the LTS is not less than the GTS, i.e. is greater than or equal to the GTS, then in flow 480 the GTS is incremented to an incremented GTS. As examples, the GTS may be incremented by any value, such as in increments of one, two, or another value. Next, in flow 485, the meta-data location is set to at least the incremented GTS value to track that the first transaction is the most recently committed transaction at that time.

Turning to Figure 5, an embodiment of a flow diagram for inserting operations/instructions to perform efficient and consistent validation in an STM is illustrated. In one embodiment, a compiler is executed to compile program code. During compilation, instructions and/or operations are inserted. An instruction and an operation both refer to code to perform a task/operation or a group or plurality of tasks/operations. For example, often an instruction, or an operation as used herein as well, includes multiple micro-operations. As an example, a compare-and-swap instruction includes multiple micro-operations to atomically compare the contents of a memory location to a given value and, if they are the same, modify the contents of that memory location to a given new value. Therefore, the instructions and operations discussed below, when executed, may include a single operation/micro-operation or multiple operations/micro-operations to perform a single task/operation or multiple tasks/operations.

In addition, instructions/operations in program code may be detected and/or inserted in any order. For example, in Figure 5's flow, a load operation is detected in flow 515 and associated load instructions/barriers are inserted in flows 520 and 525, while a store operation is detected in flow 530 and a write barrier is inserted in flow 535. However, a write may be detected before a read. In addition, write and read barriers may be functions inserted at any time during compilation, such as at the beginning or end, and when the store or read operations are detected, calls to the functions are inserted to "insert a barrier before or after an operation."

In flow 505, a start transaction instruction is detected.
In flow 510, a first instruction is inserted, when executed, to load a global timestamp (GTS) into a local timestamp (LTS) associated with the first transaction. In one embodiment, the global timestamp is to hold a most recent timestamp value of a most recent committed transaction. As an example, a call to a start transaction function is inserted, and the first instruction is inserted in the start transaction function. Examples of the first instruction include a copy, a load, a store, or another operation to read a value from the GTS and store the value in the LTS, such as a copy operation, when executed, to copy the GTS to the LTS.

Next, in flow 515, a load operation referencing an address in the transaction is detected. In flow 520, a read barrier is inserted before the load operation. In one embodiment, a call to a read barrier function is inserted before the load operation. In addition, at some point during compilation, compiler code, when executed, is also to insert the read barrier function. As an example, the read barrier function, when executed, is to determine a first version held in a version location, such as a meta-data location associated with the address. In addition, the read barrier is to determine if the first version is greater than the local timestamp. In response to the first version being greater than the local timestamp: the GTS, such as a current GTS, is reloaded into the LTS, a plurality of previous reads in the first transaction are validated, and the first transaction is aborted in response to one of the plurality of previous reads in the first transaction not being valid. The read barrier also logs the first version for later validation. An illustrative embodiment of pseudo code for a read barrier is illustrated below in Figure A.

ReadBarrier():
    If(OwnLock(m)) {
        v ← GetVersion()
        goto Done
    }
    v ← WaitOnOtherLockAndGetVersion()
    if(v > LocalTimeStamp) {
        LocalTimeStamp ← GlobalTimeStamp
        For each Logged()
            ValidateVersion()
    }
    Log()
Done:

Figure A: An illustrative embodiment of pseudo code for a read barrier

Here, the read barrier function includes a group of read barrier operations. A first read barrier operation, i.e. v ← WaitOnOtherLockAndGetVersion(), when executed, is to wait until the address is unlocked and is to obtain the current version. A second read barrier operation, i.e. if(v > LocalTimeStamp), when executed, is to determine if the current version is greater than the local timestamp. In addition, a group of validation operations, such as For each Logged() and ValidateVersion(), when executed in response to the current version being greater than the local timestamp, is to validate a plurality of previous reads in the transaction. A third read barrier operation, such as LocalTimeStamp ← GlobalTimeStamp, when executed in response to the current version being greater than the local timestamp, is to copy the global timestamp to the local timestamp. Furthermore, a fourth read barrier operation, such as Log(), when executed, is to log the current version in a read log.
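As a non-limiting C++ illustration of the Figure A read barrier, the logic might be sketched as follows. The Txn and LoggedRead types, the odd-version/even-lock encoding, and the helper names are assumptions carried over from the earlier lock sketch, not details mandated by the embodiment:

#include <atomic>
#include <cstdint>
#include <vector>

extern std::atomic<uint64_t> GlobalTimeStamp;  // GTS; hypothetical global

struct LoggedRead {
    std::atomic<uint64_t>* meta;  // meta-data/version location
    uint64_t version;             // version logged at the read barrier
};

struct Txn {
    uint64_t local_ts = 0;                            // LTS, copied from GTS at start
    std::vector<LoggedRead> read_log;                 // read set
    std::vector<std::atomic<uint64_t>*> write_locks;  // locks held for writes
    bool owns_lock(std::atomic<uint64_t>* m) const;   // assumed defined elsewhere
    [[noreturn]] void abort();                        // assumed defined elsewhere
};

// Sketch of the Figure A read barrier: wait on a foreign lock, then
// validate the read set on demand if the version exceeds the LTS.
void read_barrier(Txn& tx, std::atomic<uint64_t>* meta) {
    if (tx.owns_lock(meta))
        return;                                   // our own write lock: proceed
    uint64_t v;
    do {                                          // WaitOnOtherLockAndGetVersion
        v = meta->load(std::memory_order_acquire);
    } while ((v & 1) == 0);                       // even value = locked
    if (v > tx.local_ts) {                        // a commit happened after start
        tx.local_ts = GlobalTimeStamp.load(std::memory_order_acquire);
        for (const LoggedRead& r : tx.read_log)   // on demand validation
            if (r.meta->load(std::memory_order_acquire) != r.version)
                tx.abort();                       // a prior read is invalid
    }
    tx.read_log.push_back({meta, v});             // Log(): record current version
}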
In flow 525, a call to a version check barrier is also inserted after the load operation. In one embodiment, the version check barrier includes a call to a version check function, when executed, to determine a second version held in the version location associated with the address, determine if the second version is different from the first version, and abort the first transaction in response to the second version being different from the first version. An illustrative embodiment of pseudo code for a version check barrier function is illustrated below in Figure B.

ReadCheck():
    v_new ← AbortOnOtherLockOrGetVersion(m)
    If(v != v_new)
        Abort()

Figure B: An illustrative embodiment of pseudo code for a version check barrier function

Here, the version check barrier function includes a group of version check barrier operations, when executed, to compare a subsequent version associated with the address to the current logged version. A first version check barrier operation, such as v_new ← AbortOnOtherLockOrGetVersion(m), when executed, is to obtain the subsequent version corresponding to the data address if no other transaction has a lock on the address, or abort the current transaction if another transaction has a lock on the address. A second version check barrier operation, such as If(v != v_new), when executed, is to determine if the subsequent version is different from the current version. In addition, a call to an abort function is to be executed, in response to the subsequent version being different from the current logged version, to abort the transaction.

Next, in flow 530, a store operation referencing the address in the first transaction is detected. In one embodiment, a call to a write barrier function is inserted before the store operation in flow 535. As an example, the write barrier function, when executed, is to acquire a lock for the address. An illustrative embodiment of pseudo code for a write barrier function is illustrated below in Figure C. Here, a write lock is acquired for the address in response to not already owning a lock for the address. Note that a write set may also be logged, which includes a log write set pseudo code operation before Done below in Figure C. After acquiring the lock, the data is logged in the write set.

WriteBarrier():
    If(OwnLock())
        goto Done
    AcquireLock()
Done:

Figure C: An illustrative embodiment of pseudo code for a write barrier function

In flow 540, a commit transaction instruction is detected. In one embodiment, a call to a commit function is inserted. The commit function, when executed, is to determine if the local timestamp is less than the global timestamp. In response to the local timestamp being less than the global timestamp, the commit function, when executed, is to determine if a plurality of previous reads in the first transaction are valid and abort the first transaction in response to one of the plurality of previous reads in the first transaction not being valid. Furthermore, the global timestamp is to be incremented to an incremented global timestamp, and the version location, i.e. the meta-data location, associated with the address is to be set to the incremented global timestamp. An illustrative embodiment of pseudo code for a commit function is illustrated below in Figure D.

Commit():
    If(LocalTimeStamp < GlobalTimeStamp) {
        For each Logged()
            ValidateVersion()
    }
    v ← ++GlobalTimeStamp
    For each Locked m {
        SetVersion()
        ReleaseLock()
    }

Figure D: An illustrative embodiment of pseudo code for a commit function

In this example, a first commit operation, If(LocalTimeStamp < GlobalTimeStamp), when executed, is to determine if the local timestamp is less than the global timestamp. A group of validation operations, For each Logged() and ValidateVersion(), when executed in response to the local timestamp being less than the global timestamp, is to validate a plurality of previous reads in the transaction. A second commit operation, v ← ++GlobalTimeStamp, when executed, is to increment the global timestamp to an incremented global timestamp. Additionally, a third commit operation, SetVersion(), when executed, is to set a most recent version associated with the address to the incremented global timestamp in response to a lock being acquired for the address during execution of the transaction. Note, in one embodiment, the transaction is a read only transaction, where the commit operation is potentially omitted.
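Continuing the illustrative C++ sketch (and reusing the hypothetical Txn and LoggedRead types from above), the Figure B version check barrier and the Figure D commit sequence might be rendered roughly as follows. The increment of two reflects the statement above that the GTS may be incremented by any value; keeping versions odd under this sketch's encoding additionally assumes the GTS is initialized to an odd value:

#include <atomic>
#include <cstdint>

extern std::atomic<uint64_t> GlobalTimeStamp;  // as in the previous sketch

// Figure B sketch: abort if the location is now locked by another
// transaction, or if its version no longer matches the logged one.
void version_check_barrier(Txn& tx, std::atomic<uint64_t>* meta,
                           uint64_t logged_version) {
    uint64_t v_new = meta->load(std::memory_order_acquire);
    if ((v_new & 1) == 0 || v_new != logged_version)
        tx.abort();
}

// Figure D sketch: validate the read set only if some transaction
// committed after this one started, then publish the new version.
void commit(Txn& tx) {
    if (tx.local_ts < GlobalTimeStamp.load(std::memory_order_acquire)) {
        for (const LoggedRead& r : tx.read_log)   // validate the whole read set
            if (r.meta->load(std::memory_order_acquire) != r.version)
                tx.abort();
    }
    // v <- ++GlobalTimeStamp; incrementing by two keeps published
    // versions odd (unlocked) under this sketch's assumed encoding.
    uint64_t v = GlobalTimeStamp.fetch_add(2, std::memory_order_acq_rel) + 2;
    for (std::atomic<uint64_t>* m : tx.write_locks)
        m->store(v, std::memory_order_release);   // SetVersion() + ReleaseLock()
}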
As illustrated above, efficient and consistent validation of an STM may be performed. Previously, an invalidating access was potentially not detected until an attempt to commit the transaction. As a result, between when the invalidating access occurs/commits and the attempt to commit the transaction, the inconsistent data may be used during execution, which may lead to an execution exception or infinite looping. Furthermore, the execution cycles spent during the inconsistent execution are potentially wasted, as the transaction is to be aborted and restarted. Therefore, by inserting version check barriers after loads and utilizing timestamps, invalidating accesses/conflicts are detectable earlier, which allows a transaction to abort without extra wasted execution cycles or incurring a program exception or infinite looping. In addition, spurious program errors due to the inconsistency are potentially avoided through on demand validation.

The embodiments of methods, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible or machine readable medium which are executable by a processing element. A machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals); etc.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.

This disclosure includes all the subject matter recited in the following clauses:

1.
A method comprising:
storing a global timestamp value in a first local timestamp associated with a first transaction, in response to starting execution of the first transaction;
in response to encountering a current read operation, which is included in the first transaction and references a first address, validating a plurality of previous read operations in the first transaction, if a current version associated with the first address is greater than the local timestamp.

2. The method of clause 1, wherein in response to encountering the current read operation, further checking if the first address is unlocked and logging the current version in response to the first address being unlocked.

3. The method of clause 2, further comprising:
performing the current read operation; and
after performing the current read operation,
checking a subsequent version associated with the first address, and
aborting the first transaction, if the subsequent version is different from the current version.

4. The method of clause 3, further comprising: in response to encountering a write operation referencing the first address, acquiring a lock in a meta-data location associated with the first address.

5. The method of clause 4, further comprising: committing the first transaction, wherein committing the transaction comprises:
validating each read operation in the first transaction, including the plurality of previous read operations and the current read operation, in response to the first local timestamp being less than the global timestamp;
incrementing the global timestamp value to an incremented global timestamp value; and
setting a write version in the meta-data location to at least the incremented global timestamp value.

6. The method of clause 1, wherein validating a plurality of previous read operations in the first transaction comprises: determining the plurality of previous read operations are valid, if a plurality of logged versions associated with the plurality of previous read operations correspond to a plurality of current versions associated with the plurality of previous read operations.

7. An article of manufacture including program code which, when executed by a machine, causes the machine to perform the operations of:
detecting a load operation referencing an address in a transaction;
inserting a group of read barrier operations to be executed before the load operation, the group of read barrier operations, when executed, to obtain a current version associated with the address; and
inserting a group of version check barrier operations after the load operation, the group of version check barrier operations, when executed, to compare a subsequent version associated with the address to the current version.

8. The article of manufacture of clause 7, further comprising:
inserting a call to a read barrier function before the load operation, wherein the read barrier function includes the group of read barrier operations; and
inserting a call to a version check barrier function after the load operation, wherein the version check barrier function includes the group of version check barrier operations.

9.
The article of manufacture of clause 7, wherein the group of version check barrier operations includes:
a first version check barrier operation, when executed, to obtain the subsequent version;
a second version check barrier operation, when executed, to determine if the subsequent version is different from the current version; and
a call to an abort function, when executed in response to the subsequent version being different from the current version, to abort the transaction.

10. The article of manufacture of clause 7, further comprising:
detecting a start transaction instruction; and
inserting a copy operation to be executed at the start of the transaction, the copy operation, when executed, to copy a global timestamp to a local timestamp to be associated with the transaction.

11. The article of manufacture of clause 10, wherein the group of read barrier operations includes:
a first read barrier operation, when executed, to obtain the current version;
a second read barrier operation, when executed, to determine if the current version is greater than the local timestamp; and
a group of validation operations, when executed in response to the current version being greater than the local timestamp, to validate a plurality of previous reads in the transaction.

12. The article of manufacture of clause 11, wherein the group of read barrier operations further includes:
a third read barrier operation, when executed in response to the current version being greater than the local timestamp, to copy the global timestamp to the local timestamp; and
a fourth read barrier operation, when executed, to log the current version in a read log.

13. The article of manufacture of clause 10, further comprising:
detecting a commit transaction instruction; and
inserting a group of commit operations to be executed in response to committing the transaction, the group of commit operations including:
a first commit operation, when executed, to determine if the local timestamp is less than the global timestamp; and
a group of validation operations, when executed in response to the local timestamp being less than the global timestamp, to validate a plurality of previous reads in the transaction.

14. The article of manufacture of clause 13, wherein the group of commit operations further includes:
a second commit operation, when executed, to increment the global timestamp to an incremented global timestamp; and
a third commit operation, when executed, to set a most recent version associated with the address to the incremented global timestamp in response to a lock being acquired for the address during execution of the transaction.

15. A system comprising:
a memory device to store compiler code and program code; and
a processor associated with the memory device, the processor to execute the compiler code, wherein the compiler code, when executed, is to:
detect a first transaction in the program code; and
insert a first instruction, when executed, to load a global timestamp into a local timestamp associated with the first transaction, wherein the global timestamp is to hold a most recent timestamp value of a most recent committed transaction.

16. The system of clause 15, wherein the compiler code, when executed, is also to:
detect a load operation referencing an address in the first transaction;
insert a call to a read barrier function before the load operation; and
insert a call to a version check function after the load operation.

17.
The system of clause 16, wherein the compiler code, when executed, is also to insert the read barrier function, the read barrier function, when executed, to:
determine a first version held in a version location associated with the address;
determine if the first version is greater than the local timestamp;
in response to the first version being greater than the local timestamp:
reload the global timestamp into the local timestamp;
determine if a plurality of previous reads in the first transaction are valid; and
abort the first transaction in response to one of the plurality of previous reads in the first transaction not being valid; and
log the first version.

18. The system of clause 17, wherein the compiler code, when executed, is also to insert the version check function, the version check function, when executed, to:
determine a second version held in the version location associated with the address;
determine if the second version is different from the first version; and
abort the first transaction, in response to the second version being different from the first version.

19. The system of clause 18, wherein the compiler code, when executed, is also to:
detect a store operation referencing the address in the first transaction; and
insert a call to a write barrier function before the store operation, the write barrier function, when executed, to acquire a lock for the address.

20. The system of clause 19, wherein the compiler code, when executed, is also to:
detect a commit instruction in the first transaction; and
insert a call to a commit function in response to detecting the commit instruction, wherein the commit function, when executed, is to:
determine if the local timestamp is less than the global timestamp;
in response to the local timestamp being less than the global timestamp, determine if the plurality of previous reads in the first transaction are valid, and abort the first transaction in response to one of the plurality of previous reads in the first transaction not being valid;
increment the global timestamp to an incremented global timestamp; and
modify the version location associated with the address to the incremented global timestamp.
A capacitor device includes a first electrode having a first metal alloy or a metal oxide, a relaxor ferroelectric layer adjacent to the first electrode, where the ferroelectric layer includes oxygen and two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, potassium, or niobium, and a second electrode coupled with the relaxor ferroelectric layer, where the second electrode includes a second metal alloy or a second metal oxide.
1. A capacitor device, including:
a first electrode, the first electrode including a first metal alloy or a first metal oxide;
a ferroelectric layer, the ferroelectric layer adjacent to the first electrode, the ferroelectric layer including two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, potassium, or niobium, and oxygen; and
a second electrode, the second electrode coupled with the ferroelectric layer, the second electrode including a second metal alloy or a second metal oxide.

2. The capacitor device according to claim 1, wherein the ferroelectric layer includes a combination of one of magnesium or zirconium and lead, niobium, and oxygen.

3. The capacitor device according to claim 1, wherein the ferroelectric layer includes a first combination of Pb, Mg, Nb, and O and a second combination of Pb, Ti, and O, wherein the atomic percentages of Mg and Nb in the ferroelectric layer are greater than the atomic percentage of Ti in the ferroelectric layer.

4. The capacitor device according to claim 1, wherein the concentration of the first combination is at most 100% greater than the concentration of the second combination.

5. The capacitor device according to claim 1, wherein the ferroelectric layer includes a first combination of Pb, Mg, Nb, and O and a second combination of Ba, Ti, and O, wherein the atomic percentages of Pb, Mg, and Nb in the ferroelectric layer are greater than the atomic percentages of Ba and Ti in the ferroelectric layer.

6. The capacitor device according to claim 1, wherein the ferroelectric layer includes a first combination of Pb, Mg, Nb, and O and a second combination of Bi, Fe, and O, wherein the atomic percentages of Pb, Mg, and Nb in the ferroelectric layer are greater than the atomic percentages of Bi and Fe in the ferroelectric layer.

7. The capacitor device according to claim 1, wherein the ferroelectric layer includes a combination of PbMgxNb1-xO, BaTiO3, PbTiO3, and BiFeO3.

8. The capacitor device according to claim 1, wherein the ferroelectric layer includes a first combination of Pb, Mg, Nb, and O and a second combination of Pb, Zr, and O, wherein the atomic percentages of Mg and Nb in the ferroelectric layer are greater than the atomic percentage of Zr in the ferroelectric layer.

9. The capacitor device according to any one of claims 1-8, wherein the ferroelectric layer has a thickness between 5 nm and 50 nm.

10. The capacitor device according to claim 1, wherein the ferroelectric layer includes a combination of Ba oxide, Ti oxide, and Nd oxide.

11. The capacitor device according to claim 1, wherein the ferroelectric layer is a first ferroelectric layer (104), and the capacitor device further comprises a second ferroelectric layer (110) between the first ferroelectric layer and the first electrode or the second electrode.

12. The capacitor device according to claim 11, wherein the second ferroelectric layer includes two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, neodymium, strontium, or niobium, and oxygen, and wherein the material of the first ferroelectric layer is different from the material of the second ferroelectric layer.

13. The capacitor device according to claim 11, wherein the second ferroelectric layer includes hafnium and oxygen, and is doped with one or more of Zr, Al, Si, N, Y, or La.

14. The capacitor device according to claim 11, wherein the first ferroelectric layer has a dielectric constant between 100 and 2200, and the second ferroelectric layer has a dielectric constant
between 20 and 50.

15. The capacitor device according to claim 11, wherein the first ferroelectric layer has a thickness between 4 nm and 49 nm, and the second ferroelectric layer has a thickness between 1 nm and 46 nm, wherein the combined thickness of the first ferroelectric layer and the second ferroelectric layer is between 5 nm and 50 nm.

16. A capacitor device, including:
a first electrode, the first electrode including a first metal alloy or metal oxide;
a multi-layer stack, the multi-layer stack adjacent to the first electrode, the multi-layer stack including:
a bi-layer stack, including:
one of a first relaxor ferroelectric layer or a first non-relaxor ferroelectric layer, the one of the first relaxor ferroelectric layer or the first non-relaxor ferroelectric layer including two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, or niobium, and oxygen; and
one of a second relaxor ferroelectric layer or a second non-relaxor ferroelectric layer, the one of the second relaxor ferroelectric layer or the second non-relaxor ferroelectric layer on the one of the first relaxor ferroelectric layer or the first non-relaxor ferroelectric layer;
a third relaxor ferroelectric layer, the third relaxor ferroelectric layer on the bi-layer stack, wherein the material included in the third relaxor ferroelectric layer is substantially the same as the material of the first ferroelectric layer; and
a second electrode, the second electrode coupled with the third relaxor ferroelectric layer, the second electrode including a second metal alloy.

17. The capacitor device according to claim 16, wherein the multi-layer stack includes a plurality of bi-layers, wherein the number of the plurality of bi-layers is in the range of 1 to 10, wherein the material layer stack has a thickness between 5 nm and 50 nm, and wherein the bi-layer stack has a thickness between 4 nm and 49 nm, and the third relaxor ferroelectric layer has a thickness of at least 1 nm.

18. The capacitor device according to claim 16, wherein the third relaxor ferroelectric layer includes a material that is substantially the same as that of the first ferroelectric layer.

19. A system including:
an integrated circuit comprising the capacitor device according to any one of claims 16 to 18; and
a display device coupled to the integrated circuit, the display device to display an image based on a signal communicated with the integrated circuit.

20. A system including:
an integrated circuit comprising the capacitor device according to any one of claims 1 to 15; and
a display device coupled to the integrated circuit, the display device to display an image based on a signal communicated with the integrated circuit.
Relaxor ferroelectric capacitor and manufacturing method thereof

BACKGROUND

Generally, ferroelectric materials have various applications in the modern electronics industry. Examples of some applications of ferroelectric materials include use in capacitors and transistors. Capacitors in integrated circuits can be used to create storage devices or for circuit decoupling. In these applications, ferroelectric materials can be used to increase capacitance or to reduce leakage current density. Therefore, there is a continuing need to improve capacitance by using materials that achieve higher dielectric strength while minimizing leakage current.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, the materials described herein are shown by way of example and not limitation. For simplicity and clarity of description, the elements shown in the drawings are not necessarily drawn to scale. For example, for clarity, the sizes of some elements may be exaggerated relative to other elements. Moreover, for clarity of discussion, various physical features may be represented in their simplified "ideal" forms and geometric shapes, but nevertheless, it should be understood that actual implementations may only approximate the ideal forms shown. For example, smooth surfaces and right-angle intersections may be drawn without regard to the finite roughness, corner rounding, and imperfect angular intersections characteristic of structures formed by nanofabrication techniques. In addition, where deemed appropriate, reference numerals have been repeated in the drawings to indicate corresponding or similar elements.

FIG. 1A shows a cross-sectional view of a capacitor including a relaxor ferroelectric layer, according to an embodiment of the present disclosure.
FIG. 1B shows a cross-sectional view of a capacitor including a relaxor ferroelectric layer and electrodes including three layers of different materials, according to an embodiment of the present disclosure.
FIG. 1C shows a cross-sectional view of a capacitor including a first relaxor ferroelectric layer and a second ferroelectric layer, according to an embodiment of the present disclosure.
FIG. 1D shows a cross-sectional view of a capacitor including a multi-layer stack having a bi-layer stack, according to an embodiment of the present disclosure, wherein the bi-layer stack includes a first relaxor ferroelectric layer and a ferroelectric layer, the ferroelectric layer covered by the relaxor ferroelectric layer.
FIG. 1E shows a cross-sectional view of a capacitor including a multi-layer stack having a plurality of bi-layer stacks, according to an embodiment of the present disclosure, wherein each of the bi-layer stacks includes a first relaxor ferroelectric layer and a ferroelectric layer, and wherein the plurality of bi-layer stacks is covered by a second relaxor ferroelectric layer.
FIG. 2A shows a plot comparing the electric polarization versus voltage characteristics of a relaxor ferroelectric material and a non-relaxor ferroelectric material.
FIG. 2B shows a MIM capacitor test structure connected to a pair of voltage terminals.
FIG. 3 shows a cross-sectional view of a trench capacitor including a relaxor ferroelectric layer and a ferroelectric layer, in accordance with an embodiment of the present disclosure.

FIG. 4A shows a cross-sectional view of a pair of trench capacitors, in accordance with an embodiment of the present disclosure, where each capacitor includes a relaxor ferroelectric layer and a ferroelectric layer adjacent to the relaxor ferroelectric layer.

FIG. 4B shows a plan view of the trench capacitors 402 and 404 along the line A-A' in FIG. 4A.

FIG. 4C shows a cross-sectional view of a pair of trench capacitors, in accordance with an embodiment of the present disclosure, where each capacitor includes a relaxor ferroelectric layer and a ferroelectric layer adjacent to the relaxor ferroelectric layer.

FIG. 5 shows a flowchart of a method for fabricating a capacitor.

FIG. 6A shows a first conductive interconnect layer formed over a substrate.

FIG. 6B shows the structure of FIG. 6A after forming dummy structures on the first conductive interconnect layer, in accordance with an embodiment of the present disclosure.

FIG. 6C shows the structure of FIG. 6B after forming a second conductive interconnect layer on the first conductive interconnect layer and adjacent to the dummy structures.

FIG. 6D shows the structure of FIG. 6C after removing the dummy structures to form a first opening and a second opening in the second conductive interconnect layer.

FIG. 6E shows the structure of FIG. 6D after forming a first trench capacitor in the first opening and a second trench capacitor in the second opening, in accordance with an embodiment of the present disclosure.

FIG. 6F shows the structure of FIG. 6E after forming a dielectric layer on the second conductive interconnect layer and on the first and second trench capacitors, followed by forming a plurality of openings in the dielectric material.

FIG. 6G shows the structure of FIG. 6F after forming a via electrode in each of the plurality of openings in the dielectric material.

FIG. 7A shows an electrical diagram illustrating the coupling between a pair of capacitors.

FIG. 7B shows an electrical diagram illustrating the coupling between three capacitors.

FIG. 7C shows an electrical diagram illustrating the coupling between four capacitors.

FIG. 8 shows a cross-sectional view of a trench capacitor coupled to a transistor.

FIG. 9 shows a computing device, in accordance with an embodiment of the present disclosure.

FIG. 10 shows an integrated circuit (IC) structure that includes one or more embodiments of the present disclosure.

Detailed description

Various capacitor devices with one or more relaxor ferroelectric materials and capping schemes are described. In the following description, numerous specific details (for example, structural schemes and detailed fabrication methods) are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as transistor operations and switching operations associated with capacitors, are described in lesser detail so as not to unnecessarily obscure embodiments of the present disclosure.
Furthermore, it is to be understood that the various embodiments shown in the drawings are illustrative representations and are not necessarily drawn to scale.

In some instances, in the following description, well-known methods and devices are shown in block diagram form, rather than in detail, to avoid obscuring the present disclosure. Reference throughout this specification to "an embodiment" or "one embodiment" or "some embodiments" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrase "in an embodiment" or "in one embodiment" or "some embodiments" in various places throughout this specification are not necessarily referring to the same embodiment of the present disclosure. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment anywhere the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.

As used in the specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

The terms "coupled" and "connected", along with their derivatives, may be used herein to describe functional or structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical, optical, or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical, electrical, or magnetic contact with each other, and/or that two or more elements co-operate or interact with each other (for example, as in a cause-and-effect relationship).

The terms "over", "under", "between", and "on" as used herein refer to the relative position of one component or material with respect to other components or materials, where such physical relationships are noteworthy. For example, in the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. As used throughout this specification, and in the claims, a list of items joined by the term "at least one of" or "one or more of" can mean any combination of the listed terms.

The term "adjacent" here generally refers to a position of a thing being next to (for example, immediately next to or close to, with one or more things between them) or adjoining another thing (for example, abutting it).

The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal.
The meanings of "a" and "the" include plural forms. The meaning of "in" includes "in" and "on".The term "device" can generally refer to a device according to the context in which the term is used. For example, a device may refer to a stack of layers or structures, a single structure or layer, the connection of various structures having active and/or passive elements, and the like. Generally, the device is a three-dimensional structure having a plane along the xy direction of the xyz Cartesian coordinate system and a height along the z direction. The plane of the device may also be the plane of the device including the device.As used throughout this specification and in the claims, a list of items connected by the term "at least one" or "one or more" can refer to any combination of the listed terms.Unless otherwise indicated in the clear context in which they are used, the terms "substantially equal," "approximately equal," and "approximately equal" mean that there are only incidental changes between the two things described. In the art, such changes are usually at most +/-10% of the predetermined target value.In the specification and claims, the terms "left", "right", "front", "rear", "top", "bottom", "above", "below", etc. (if any Words) are used for descriptive purposes and are not necessarily used to describe permanent relative positions. For example, the terms "above", "below", "front", "back", "top", "bottom", and "on" as used herein are Refers to the relative position of a component, structure, or material relative to other referenced components, structures, or materials in the device, where such a physical relationship is worth noting. These terms are used herein for descriptive purposes only, and are mainly adopted in the context of the z-axis of the device, and therefore may be an orientation relative to the device. Therefore, if the device is inverted relative to the contextual orientation of the figures provided, the first material "above" the second material in the context of the figures provided herein may also be "below" the second material. In the context of materials, a material disposed above or below another material may be in direct contact or may have one or more intervening materials. Moreover, one material disposed between the two materials may be in direct contact with the two layers or may have one or more intervening layers. Instead, the first material "on" the second material is in direct contact with the second material. A similar distinction is made in the context of component assembly.The term "between" can be used in the context of the z-axis, x-axis, or y-axis of the device. The material between the two other materials can be in contact with one or both of those materials, or can be separated from both of the other two materials by one or more intervening materials. Therefore, a material "between" two other materials can be in contact with either of the other two materials, or it can be coupled to the other two materials through an intervening material. The device between two other devices may be directly connected to one or two of those devices, or it may be separated from both of the other two devices by one or more intervening devices.Metal-insulator-metal (MIM) capacitors can be used in a variety of applications, such as decoupling capacitors in high-power microprocessor units, radio frequency circuits, and other analog integrated circuit devices. For example, decoupling capacitors provide a shunt path for transient currents in the circuit. 
Transient currents can often damage active electronic devices, such as transistors. The decoupling capacitor can also supply power to the integrated circuit and keep the power supply voltage stable. Decoupling capacitors do this by absorbing excess electrical energy (charge) flowing through the circuit. It is desirable for a decoupling capacitor to have a capacitance large enough (for example, above 8 microfarads/cm^2) to absorb the excess power and provide a stable supply voltage. A large capacitance can be obtained when the insulator in the MIM capacitor has a high dielectric constant. Dielectric constants above 20 may be considered high. Typical dielectric constants of known dielectric materials (for example, oxides of hafnium, aluminum, or zirconium) are in the range of 25-35. The leakage current of MIM capacitors using these dielectric materials is in the range of 10^-6 to 10^-3 A/cm^2. The capacitance of a MIM capacitor using one or more conventional dielectric materials can be increased by reducing the thickness of the dielectric material. However, reducing the total thickness of the dielectric material may result in an exponential increase in leakage current.

By implementing a material having a dielectric constant substantially greater than 50, the capacitance in the MIM capacitor can be increased without reducing the thickness of the dielectric material. Increasing the capacitance enables the MIM capacitor, for example, to absorb more energy during a transient discharge. One class of materials with a high dielectric constant is known as relaxor ferroelectrics. The dielectric permittivity (related to the dielectric constant) of a relaxor ferroelectric material depends on the temperature of the material, with the peak dielectric permittivity occurring near the Curie temperature of the material.

Relaxor ferroelectric materials have self-assembled domains of electric dipoles oriented in a particular direction. The domains have a short-range order in the range of 2 nm to 10 nm. In domains with such short-range order, the electric dipoles can be easily reoriented (or flipped) toward a desired direction by a weak externally applied electric field. The electric field can be applied by biasing the two electrodes of a MIM capacitor directly adjacent to the relaxor ferroelectric material. The magnitude of the externally applied field is much smaller than the field required to reorient the electric dipoles in a non-relaxor ferroelectric material. Because the domains in non-relaxor ferroelectric materials are macroscopic in size (for example, on the order of a few microns), a larger field may be required there.

Another important characteristic of relaxor ferroelectric materials is that the peak dielectric constant depends on the frequency of the applied external electric field. Generally, the peak dielectric constant shifts and decreases as the frequency increases.
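As a rough numerical illustration of the capacitance-versus-thickness trade-off described above, the following sketch applies the simple parallel-plate estimate C/A = eps0*k/t. The specific dielectric constants and thicknesses are hypothetical values drawn from the ranges quoted in this disclosure, not measured device data.

```python
# Illustrative parallel-plate estimate of MIM capacitance density, C/A = eps0 * k / t.
# The dielectric constants and thicknesses below are hypothetical examples taken
# from the ranges quoted in this disclosure, not measured device data.

EPS0 = 8.854e-14  # vacuum permittivity in F/cm

def capacitance_density_uF_per_cm2(k, thickness_nm):
    """Return C/A in microfarads per cm^2 for dielectric constant k and thickness in nm."""
    t_cm = thickness_nm * 1e-7          # 1 nm = 1e-7 cm
    return EPS0 * k / t_cm * 1e6        # F/cm^2 -> uF/cm^2

# A conventional high-k dielectric (k ~ 30) at 5 nm:
print(capacitance_density_uF_per_cm2(30, 5))    # ~5.3 uF/cm^2

# A relaxor ferroelectric (k ~ 1000, within the 100-2200 range cited later)
# at the same 5 nm thickness -- no thinning, hence no exponential leakage penalty:
print(capacitance_density_uF_per_cm2(1000, 5))  # ~177 uF/cm^2
```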
As noted above, the dependence of the peak dielectric constant on the frequency and temperature of the applied electric field enables relaxor ferroelectric materials to be used in a wide range of integrated circuit applications.

In accordance with an embodiment of the present disclosure, a capacitor device includes: a first electrode having a first metal alloy or metal oxide; a ferroelectric layer adjacent to the first electrode, wherein the ferroelectric layer includes two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, or niobium, and oxygen; and a second electrode coupled to the ferroelectric layer, wherein the second electrode includes a second metal alloy or a second metal oxide. In an exemplary embodiment, the ferroelectric layer is a relaxor ferroelectric layer. In an embodiment, the capacitor is a planar MIM capacitor. In other embodiments, the capacitor is a trench capacitor, wherein the first electrode is adjacent to the sidewall of a via and on the base of the via, wherein the ferroelectric layer is conformal with the first electrode, and further wherein the second electrode is conformal with the ferroelectric layer. In some embodiments, the MIM capacitor may include a stack having two or more ferroelectric layers, where at least one ferroelectric layer is a relaxor ferroelectric layer. In some such embodiments, all layers in the stack are relaxor ferroelectric layers.

FIG. 1A is an illustration of a cross-sectional view of a capacitor device 100A, in accordance with an embodiment of the present disclosure. The capacitor 100A includes a first electrode 102 having a first metal alloy or metal oxide and a ferroelectric layer 104 adjacent to the electrode 102. As shown, the ferroelectric layer 104 includes a relaxor ferroelectric material and may be referred to as relaxor ferroelectric layer 104. The relaxor ferroelectric layer 104 includes two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, or niobium, and oxygen. A second electrode 106 is coupled with the relaxor ferroelectric layer 104, where the electrode 106 includes a second metal alloy or a second metal oxide. As shown, the capacitor device 100A is an example of a planar metal-insulator-metal device.

In an embodiment, the relaxor ferroelectric layer 104 includes a combination of one of magnesium or zirconium with lead, niobium, and oxygen. In one embodiment, the relaxor ferroelectric layer 104 includes a perovskite compound having the chemical formula ABO3, where "A" is a first element and "B" is a second element or compound. In one embodiment, where the element "A" is lead, the perovskite compound is B-site substituted. In some such examples, the B-site substitution includes a combination of at least one of magnesium or zirconium with niobium. In an embodiment, the relaxor ferroelectric layer 104 includes PbMgxNb1-xO3, where x is between 1/3 and 2/3, or PbZrxNb1-xO3, where x is between 1/3 and 2/3.

In other embodiments, the perovskite compound is doped with other compounds. For example, the relaxor ferroelectric layer 104 may include a first combination of Pb, Mg, Nb, and O, and a second combination of Pb, Ti, and O. In an exemplary embodiment, the atomic percentage of Mg and Nb in the relaxor ferroelectric layer 104 is greater than the atomic percentage of Ti in the relaxor ferroelectric layer 104.
In an embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of [Y]PbMgxNb1-xO3-[Z]PbTiO3 (for example, a solid solution in the relaxor ferroelectric layer 104), where x is between 1/3 and 2/3, "Y" represents the concentration of PbMgxNb1-xO3 in the solid solution, and "Z" represents the concentration of PbTiO3 in the solid solution. Depending on the embodiment, the concentration of PbMgxNb1-xO3 is at most 100% greater than the concentration of PbTiO3. In an embodiment, "Y" is 0.68 and "Z" is 0.32.

In another embodiment, the dopant includes a combination of barium and titanium. In an embodiment, the relaxor ferroelectric layer 104 includes a first combination of Pb, Mg, Nb, and O, and a second combination of Ba, Ti, and O. In an exemplary embodiment, the atomic percentages of Pb, Mg, and Nb in the relaxor ferroelectric layer 104 are greater than the atomic percentages of Ba and Ti in the relaxor ferroelectric layer 104. The relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of [Y]PbMgxNb1-xO3-[Z]BaTiO3 (for example, a solid solution in the relaxor ferroelectric layer 104), where x is between 1/3 and 2/3, "Y" represents the concentration of PbMgxNb1-xO3 in the solid solution, and "Z" represents the concentration of BaTiO3 in the solid solution. Depending on the embodiment, the concentration of PbMgxNb1-xO3 is at most 100% greater than the concentration of BaTiO3. In an embodiment, "Y" is 0.68 and "Z" is 0.32.

In another embodiment, the dopant includes a combination of bismuth and iron. In an embodiment, the relaxor ferroelectric layer 104 includes a first combination of Pb, Mg, Nb, and O, and a second combination of Bi, Fe, and O. In an exemplary embodiment, the atomic percentages of Pb, Mg, and Nb in the relaxor ferroelectric layer 104 are greater than the atomic percentages of Bi and Fe in the relaxor ferroelectric layer 104. The relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of [Y]PbMgxNb1-xO3-[Z]BiFeO3 (for example, a solid solution in the relaxor ferroelectric layer 104), where x is between 1/3 and 2/3, "Y" represents the concentration of PbMgxNb1-xO3 in the solid solution, and "Z" represents the concentration of BiFeO3 in the solid solution. Depending on the embodiment, the concentration of PbMgxNb1-xO3 is at most 100% greater than the concentration of BiFeO3.

In another embodiment, the dopant includes a combination of lead and zirconium. In an embodiment, the relaxor ferroelectric layer 104 includes a first combination of Pb, Mg, Nb, and O, and a second combination of Pb, Zr, and O. In an exemplary embodiment, the atomic percentages of Pb, Mg, and Nb in the relaxor ferroelectric layer 104 are greater than the atomic percentage of Zr in the relaxor ferroelectric layer 104. The relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of [Y]PbMgxNb1-xO3-[Z]PbZrO3 (for example, a solid solution in the relaxor ferroelectric layer 104), where x is between 1/3 and 2/3, "Y" represents the concentration of PbMgxNb1-xO3 in the solid solution, and "Z" represents the concentration of PbZrO3 in the solid solution. Depending on the embodiment, the concentration of PbMgxNb1-xO3 is at most 100% greater than the concentration of PbZrO3.
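To make the solid-solution notation above concrete, the following sketch converts the exemplary 0.68PbMgxNb1-xO3-0.32PbTiO3 composition into relative cation fractions per formula unit. This is an illustrative calculation only; Y = 0.68 and Z = 0.32 are the example values given above, and x = 1/3 is one of the allowed values.

```python
# Relative cation fractions for a [Y]PbMg(x)Nb(1-x)O3 - [Z]PbTiO3 solid solution.
# Illustrative only: Y = 0.68, Z = 0.32, x = 1/3 are example values from the text.

Y, Z, x = 0.68, 0.32, 1.0 / 3.0

# The A-site is Pb in both end members; the B-site is shared by Mg, Nb, and Ti.
fractions = {
    "Pb (A-site)": Y + Z,        # 1.00 -> every A-site is lead
    "Mg (B-site)": Y * x,        # 0.68 * 1/3 ~ 0.227
    "Nb (B-site)": Y * (1 - x),  # 0.68 * 2/3 ~ 0.453
    "Ti (B-site)": Z,            # 0.32
}

for element, f in fractions.items():
    print(f"{element}: {f:.3f} per formula unit")

# B-site fractions sum to 1, as required for an ABO3 perovskite:
assert abs(Y * x + Y * (1 - x) + Z - 1.0) < 1e-9
```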
In some embodiments, the relaxor ferroelectric layer 104 includes a combination of PbMgxNb1-xO3, BaTiO3, PbTiO3, and BiFeO3. The fractional volume of each compound in the relaxor ferroelectric layer 104 may be substantially the same or may differ.

In other embodiments, the relaxor ferroelectric layer 104 includes a perovskite compound based on bismuth, sodium, titanium, and oxygen. In some such embodiments, the relaxor ferroelectric layer 104 further includes two or more of barium, potassium, tantalum, antimony, zirconium, tin, or niobium.

In a first example, the relaxor ferroelectric layer 104 includes a combination of Bi, Na, Ti, and O, such as Bi0.5Na0.5TiO3.

In other embodiments, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, Ti, and O and a second combination of dopants (and, in some embodiments, a third combination). In a second example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, Ti, and O and a second combination of Ba, Ti, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x)Bi0.5Na0.5TiO3-(x)BaTiO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is between 0 and 0.1.

In a third example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, Ti, and O, a second combination of Ba, Ti, and O, and a third combination of K, Nb, Na, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x-y)Bi0.5Na0.5TiO3-(x)BaTiO3-(y)K0.5Na0.5NbO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is between 0 and 0.1 and y is between 0 and 0.1.

In a fourth example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, Ti, and O, a second combination of Ba, Ti, and O, and a third combination including a metal "M" (for example, Nb, Ta, or Sb). The relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x)(Bi0.5Na0.5)TiO3-(x)BaTiO3-(y)M2O5 (for example, a solid solution in the relaxor ferroelectric layer 104), where x is between 0 and 0.1 and y is between 0 and 0.1.

In a fifth example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, and K and a second combination of Ti, Sn, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of Bi0.5(Na0.75K0.25)0.5(Ti1-xSnx)O3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is 0, 0.02, 0.05, or 0.08.

In a sixth example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, Ti, and O, a second combination of Bi, K, Ti, and O, and a third combination of K, Na, Nb, and O.
In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x-y)Bi0.5Na0.5TiO3-(x)Bi0.5K0.5TiO3-(y)K0.5Na0.5NbO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is between 0 and 0.2 and y is between 0 and 0.1.

In a seventh example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, Ti, and O, a second combination of Ba, Ti, and O, and a third combination of Sr, Ti, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x-y)Bi0.5Na0.5TiO3-(x)BaTiO3-(y)SrTiO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is between 0 and 0.1 and y is between 0 and 1.

In an eighth example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, K, Ti, and O and a second combination of Ba, Zr, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x)(Bi0.5(Na0.82K0.18)0.5TiO3)-(x)BaZrO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is between 0 and 0.05.

In a ninth example, the relaxor ferroelectric layer 104 includes a first combination of Bi, Na, Ti, and O and a second combination of K, Na, Nb, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x)Bi0.5Na0.5TiO3-(x)K0.5Na0.5NbO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is between 0 and 0.01.

In an embodiment, the relaxor ferroelectric layer 104 includes a first combination of K, Nb, and O and a second combination of Nb, Na, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of (1-x)KNbO3-(x)NaNbO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is 0.5.

In an embodiment, the relaxor ferroelectric layer 104 includes a combination of K, Na, Nb, and O. In one such embodiment, the relative proportions of the atoms with respect to one another correspond to the proportions that would exist in a solid solution of KxNa1-xNbO3 (for example, a solid solution in the relaxor ferroelectric layer 104), for example where x is 0.5.

In another embodiment, the relaxor ferroelectric layer 104 includes a combination of barium oxide, titanium oxide, and neodymium oxide, such as BaO-TiO2-Nd2O3.

In an embodiment, the relaxor ferroelectric layer 104 has a thickness between 5 nm and 50 nm. A thickness between 5 nm and 50 nm keeps the leakage current during operation sufficiently low, for example below 10^-6 A/cm^2.

The electrodes 102 and 106 each include a metal, a metal alloy, or a conductive metal oxide. The electrode 102 may have the same material composition as the electrode 106, or a different one. The electrodes 102 and 106 may each have substantially similar work function values. In an embodiment, the electrode 102 includes a metal such as Ru, Al, Cu, W, Pt, Ir, Co, Au, Ti, or Ta, or a conductive metal oxide such as SrRuO3, Ba0.5Sr0.5RuO3, RuOx, IrOx, TiOx, or TaOx.
In an embodiment, the electrode 106 includes a metal such as Ru, Al, Cu, W, Pt, Ir, Co, Au, Ti, or Ta, or a conductive metal oxide such as SrRuO3, Ba0.5Sr0.5RuO3, RuOx, IrOx, TiOx, or TaOx.

Depending on the application, the electrode 102 may have a thickness between 20 nm and 50 nm, and the electrode 106 may have a thickness between 20 nm and 50 nm. The thickness of the electrode 102 may be substantially the same as, or different from, the thickness of the electrode 106.

In other examples, the electrode 102 may include multiple layers, such as two or three layers, forming a stacked electrode 102.

FIG. 1B shows a cross-sectional view of a capacitor device 100B, in accordance with an embodiment of the present disclosure. The capacitor device 100B includes a relaxor ferroelectric layer 104 and an electrode 102 including three layers. In the illustrative embodiment, the electrode 102 includes a first electrode layer 102A, a second electrode layer 102B on the first electrode layer 102A, and a third electrode layer 102C on the electrode layer 102B. In one embodiment, the electrode layer 102A includes tantalum, the electrode layer 102B includes ruthenium, and the electrode layer 102C includes iridium. In another embodiment, the electrode layer 102A includes tantalum, the electrode layer 102B includes iridium, and the electrode layer 102C includes ruthenium.

In an embodiment, the electrode layer 102A has a thickness between 1 nm and 10 nm, the electrode layer 102B has a thickness between 5 nm and 20 nm, and the electrode layer 102C has a thickness between 5 nm and 20 nm. In an embodiment, the combined thickness of the electrode layers 102A, 102B, and 102C is between 20 nm and 50 nm.

In other embodiments, the MIM capacitor may include more than one ferroelectric layer.

FIG. 1C shows a cross-sectional view of a capacitor 100C including a ferroelectric stack 108 between the electrodes 102 and 106. As shown, the ferroelectric stack 108 is a bilayer that includes the ferroelectric layer 104 and a ferroelectric layer 110 on the ferroelectric layer 104. The ferroelectric layer 104 is directly adjacent to and coupled with the electrode 102, and the ferroelectric layer 110 is directly adjacent to, and between, the ferroelectric layer 104 and the electrode 106. In an embodiment, the ferroelectric layer 104 includes a relaxor material and is referred to as relaxor ferroelectric layer 104.

Depending on the application, the ferroelectric layer 110 may include a relaxor or a non-relaxor material. In an embodiment, the ferroelectric layer 110 includes a relaxor material. Examples of the relaxor material include two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, neodymium, strontium, potassium, or niobium, and oxygen. When the ferroelectric layer 110 includes a relaxor material, the material of the relaxor ferroelectric layer 110 is different from the material of the relaxor ferroelectric layer 104. When the ferroelectric layer 110 includes a relaxor material, the ferroelectric stack 108 is a relaxor ferroelectric stack 108.

The individual thicknesses of the relaxor ferroelectric layer 104 and the relaxor ferroelectric layer 110 may each be in a range between 2.5 nm and 47.5 nm.
In an embodiment, the relaxor ferroelectric layer 104 and the relaxor ferroelectric layer 110 are each in a range between 2.5 nm and 47.5 nm thick, and the thickness of the relaxor ferroelectric stack 108 is between 5 nm and 50 nm.

In other embodiments, the ferroelectric layer 110 includes a non-relaxor material, such as a hafnium oxide compound doped with one or more of Zr, Al, Si, N, Y, or La. In an embodiment where the ferroelectric layer 110 does not include a relaxor material, the ferroelectric layer 110 has a thickness between 1 nm and 5 nm, and the relaxor ferroelectric layer 104 has a thickness between 2 nm and 10 nm. In some such embodiments, the combined thickness of the ferroelectric layer 110 and the relaxor ferroelectric layer 104 is between 5 nm and 50 nm.

Depending on the choice of materials, the ratio of dielectric constants between the relaxor ferroelectric layer 104 and the ferroelectric layer 110 is between 2 and 110. In an example in which the ferroelectric layer 110 includes a hafnium oxide compound doped with one or more of Zr, Al, Si, N, Y, or La, the ferroelectric layer 110 has a dielectric constant between 20 and 50, and the relaxor ferroelectric layer 104 has a dielectric constant between 100 and 2200.

In an embodiment, when the relaxor ferroelectric stack 108 includes a bilayer stack such as described above, the bilayer stack may be capped with a third ferroelectric material to introduce symmetry into the MIM capacitor.

FIG. 1D shows a cross-sectional view of a capacitor device 100D including a multilayer ferroelectric stack 112, in accordance with an embodiment of the present disclosure, wherein the multilayer ferroelectric stack 112 further includes the ferroelectric stack 108 capped by a ferroelectric layer 114.

In an embodiment, the ferroelectric stack 108 includes one of a first relaxor ferroelectric layer 104 or a first non-relaxor ferroelectric layer 104 containing two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, neodymium, potassium, or niobium, and oxygen. In some such embodiments, the ferroelectric stack 108 further includes one of a second relaxor ferroelectric layer 110 or a second non-relaxor ferroelectric layer 110 on the one of the first relaxor ferroelectric layer 104 or the first non-relaxor ferroelectric layer 104. In an exemplary embodiment, the multilayer ferroelectric stack 112 includes at least one layer having a relaxor ferroelectric material, but is symmetric about the ferroelectric layer 110 in material composition. In some such embodiments, the ferroelectric layer 114 may include a relaxor or a non-relaxor material. In one configuration, the multilayer ferroelectric stack 112 includes a relaxor ferroelectric layer 104, a relaxor ferroelectric layer 110, and a relaxor ferroelectric layer 114. In a second configuration, the multilayer ferroelectric stack 112 includes a relaxor ferroelectric layer 104, a non-relaxor ferroelectric layer 110, and a relaxor ferroelectric layer 114. In a third configuration, the multilayer ferroelectric stack 112 includes a non-relaxor ferroelectric layer 104, a relaxor ferroelectric layer 110, and a non-relaxor ferroelectric layer 114.

In some embodiments, when the ferroelectric layer 114 includes a relaxor material, the material of the relaxor ferroelectric layer 114 is the same as the material of the relaxor ferroelectric layer 104, introducing symmetry into the capacitor device 100D.
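Because the layers of such a stack are electrically in series, their capacitances combine reciprocally. The following sketch estimates the capacitance density of a relaxor/doped-HfO2 bilayer; the layer parameters are hypothetical examples chosen within the ranges quoted above, not a characterization of any specific embodiment.

```python
# Series combination of stacked dielectric layers in a MIM capacitor:
# 1/C_total = sum(1/C_i), with C_i/A = eps0 * k_i / t_i for each layer.
# Layer values are hypothetical examples within the ranges given above.

EPS0 = 8.854e-14  # F/cm

def layer_cap_density(k, t_nm):
    """Capacitance density of a single layer, in F/cm^2."""
    return EPS0 * k / (t_nm * 1e-7)

# Relaxor layer 104: k ~ 1000, 8 nm; doped-HfO2 layer 110: k ~ 35, 3 nm.
layers = [(1000, 8.0), (35, 3.0)]

c_total = 1.0 / sum(1.0 / layer_cap_density(k, t) for k, t in layers)
print(f"bilayer capacitance density ~ {c_total * 1e6:.1f} uF/cm^2")  # ~9.4
# The thin low-k layer dominates: the series total is close to its own
# capacitance density, which is one reason the relaxor layers are kept thick
# relative to any non-relaxor layer in the stack.
```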
Examples of relaxor materials suitable for the ferroelectric layer 114 include two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, neodymium, strontium, or niobium, and oxygen. In other embodiments, when the ferroelectric layer 114 includes a relaxor material, the material of the relaxor ferroelectric layer 114 is different from the material of the relaxor ferroelectric layer 104. From an operational standpoint, the characteristics of a ferroelectric stack 108 that includes both a relaxor ferroelectric material and a non-relaxor ferroelectric material are governed by the properties of the relaxor ferroelectric material.

In some embodiments, the thickness of the multilayer ferroelectric stack 112 is between 5 nm and 50 nm. In an exemplary embodiment, the layers 104, 110, and 114 having a relaxor material are thicker than the layers 104, 110, and 114 having a non-relaxor material.

In yet another embodiment, the multilayer stack 112 is a superlattice structure including a plurality of ferroelectric stacks 108.

FIG. 1E is an illustration of a cross-sectional view of a capacitor device 100E, in accordance with an embodiment of the present disclosure, in which a multilayer stack 116 is a superlattice structure. As shown, the multilayer stack 116 includes a plurality of ferroelectric stacks 108 capped by a ferroelectric layer 114. The ferroelectric layer 114 may include a relaxor material as described in connection with FIG. 1D. Referring again to FIG. 1E, in some embodiments, the material composition of each ferroelectric stack 108 may be substantially the same, where each relaxor ferroelectric layer 104 is the same as every other relaxor ferroelectric layer 104 in the multilayer stack 116, and where each ferroelectric layer 110 is the same as every other ferroelectric layer 110 in the multilayer stack 116. In other such embodiments, each ferroelectric layer 110 is a relaxor ferroelectric layer 110 having a material composition that is substantially different from the material composition of the relaxor ferroelectric layer 104.

In other embodiments, the relaxor ferroelectric layer 104 in one ferroelectric stack 108 is different from the relaxor ferroelectric layer 104 in each of the remaining ferroelectric stacks 108. In some such embodiments, the ferroelectric layer 110 in one ferroelectric stack 108 is different from the ferroelectric layer 110 in each of the remaining ferroelectric stacks 108.

In some examples, the ferroelectric layer 114 includes the same or substantially the same material as the relaxor ferroelectric layer 104 directly adjacent to the electrode 102. In other embodiments, the ferroelectric layer 114 includes the same or substantially the same material as the relaxor ferroelectric layer 104 in one or more of the ferroelectric stacks 108.

In other examples, each ferroelectric stack 108 in the superlattice multilayer stack 116 includes a non-relaxor ferroelectric layer 104 and a relaxor ferroelectric layer 110 on the non-relaxor ferroelectric layer 104. In some such embodiments, the ferroelectric layer 114 is a non-relaxor ferroelectric layer 114 to provide symmetry in the capacitor device 100E.

In an embodiment, the multilayer stack 116 includes any number from 1 to 10 ferroelectric stacks 108. In an embodiment, the multilayer stack 116 has a thickness between 5 nm and 50 nm, wherein the ferroelectric layer 114 has a thickness of at least 1 nm.
In one embodiment, where the number of ferroelectric stacks 108 is greater than one, the thickness of each ferroelectric stack 108 is substantially the same, and the total combined thickness of the ferroelectric stacks 108 and the ferroelectric layer 114 is between 5 nm and 50 nm. In some such embodiments, the combined thickness of the ferroelectric stacks 108 in the multilayer stack 116 is between 4 nm and 49 nm.

In some examples where the number of ferroelectric stacks 108 is greater than one, each ferroelectric stack 108 may have a different thickness depending on the material composition of the constituent layers 104 and 110. In some such embodiments, the total combined thickness of the ferroelectric stacks 108 and the ferroelectric layer 114 is between 5 nm and 50 nm.

FIG. 2A shows a graph of the electric polarization versus voltage characteristics (hysteresis effect) of a relaxor ferroelectric material and a non-relaxor ferroelectric material. To determine the electric polarization characteristics of a ferroelectric layer 200, a voltage is applied between a pair of electrodes 102 and 106 directly adjacent to opposite sides of the ferroelectric layer 200 in a stack such as that shown in FIG. 2B. As shown, a time-varying voltage of alternating polarity produces an oscillating electric field between the electrodes 102 and 106. In one embodiment, the ferroelectric layer 200 includes a relaxor ferroelectric material, and in a second embodiment, the ferroelectric layer 200 includes a non-relaxor material. The relaxor and non-relaxor materials in the ferroelectric layer 200 are substantially the same as the relaxor and non-relaxor materials in the ferroelectric layer 110 described above in connection with FIG. 1C.

Referring again to FIG. 2A, the hysteresis loop 201 indicates the P-V characteristic of the non-relaxor material. The degree of polarization in the non-relaxor material determines the electric field required to switch the polarization direction in the non-relaxor material. In an embodiment, the polarization changes direction (a positive value becomes negative, or vice versa) at an electric field corresponding to the voltage VF in the ferroelectric layer 200 having a non-relaxor material. The electric field corresponding to the voltage VF is the "coercive field" of the non-relaxor material.

The hysteresis loop 202 indicates the P-V characteristic of the relaxor material. The polarization changes direction (a positive value becomes negative, or vice versa) at an electric field corresponding to the voltage VRF in the relaxor material. The electric field corresponding to the voltage VRF is the "coercive field" of the relaxor material. In contrast to the P-V characteristic of the non-relaxor material, the P-V characteristic of the ferroelectric layer 200 having a relaxor material exhibits a substantially smaller coercive field. VRF is substantially smaller than VF because the smaller domains in a relaxor material respond readily to weaker electric fields, compared with the larger domains in a non-relaxor material. Moreover, the ferroelectric layer 200 having a relaxor material substantially suppresses the hysteresis effect and is therefore better suited for decoupling applications.
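The slimmer loop of the relaxor material can be visualized with a toy model. The sketch below is a qualitative illustration only; the two-branch tanh model and the values of Ps, Vc, and w are assumptions, not the measured behavior of the loops 201 and 202 in FIG. 2A.

```python
import math

# Toy two-branch hysteresis model: P(V) = Ps * tanh((V -/+ Vc) / w).
# Purely qualitative; Ps, Vc, and w below are assumed illustrative values.

def loop_opening_at_zero(ps, vc, w):
    """Difference between falling and rising branches at V = 0 (remanence)."""
    p_up = ps * math.tanh((0.0 - vc) / w)    # branch swept from -V toward +V
    p_down = ps * math.tanh((0.0 + vc) / w)  # branch swept from +V toward -V
    return p_down - p_up

# Non-relaxor: large coercive voltage VF; relaxor: much smaller VRF.
for name, vc in (("non-relaxor, VF = 1.0 V", 1.0), ("relaxor, VRF = 0.1 V", 0.1)):
    print(f"{name}: loop opening at 0 V ~ {loop_opening_at_zero(1.0, vc, 0.4):.2f}")

# The small coercive voltage nearly closes the loop (~0.49 versus ~1.97 here),
# mirroring the suppressed hysteresis of the relaxor material described above.
```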
Although planar MIM capacitors have been described so far, in other embodiments the MIM capacitor device has a non-planar geometry. An example of a non-planar geometry is a trench capacitor, where trenches can exist at various levels within an integrated circuit and can be laterally adjacent to both conductive and non-conductive materials.

FIG. 3 shows a cross-sectional view of a trench capacitor 300. In the illustrative embodiment, the trench capacitor 300 includes a relaxor ferroelectric layer 104 and a ferroelectric layer 110. The trench capacitor 300 is adjacent to a dielectric 302 and coupled to a via electrode 304 and a via electrode 306.

In the illustrative embodiment, the electrode 102 has a portion directly adjacent to the dielectric 302. As shown, the electrode 102 has sidewalls 102D and 102E directly adjacent to the dielectric 302. The sidewalls 102D and 102E may be substantially vertical or tapered. In the illustrative embodiment, the sidewalls 102D and 102E are tapered; in other embodiments, the sidewalls 102D and 102E are substantially vertical. As shown, a portion of the electrode surface 102F is directly on the via electrode 304 and a portion is on a dielectric 308 adjacent to the via electrode 304. In an embodiment, the lowermost portion of the electrode 102 has a width WE1 that is wider than the maximum lateral width WV1 of the via electrode 304. In other embodiments, WE1 is smaller than WV1. In some embodiments, the via electrode 304 is a line extending along the X axis in the figure. In some such embodiments, the electrode 102 is not adjacent to the dielectric 308. Owing to the geometry of the capacitor 300, portions of the electrode 102 may have thicknesses that are substantially the same as, or different from, one another. In an embodiment, the electrode 102 has a lateral thickness TE between 5 nm and 50 nm and a vertical thickness TVE between 5 nm and 50 nm.

As shown, the relaxor ferroelectric layer 104 is directly adjacent to the electrode 102 and is substantially conformal with the shape of the electrode 102. As shown, the relaxor ferroelectric layer 104 has a lateral thickness TF1 between 5 nm and 15 nm and a vertical thickness TVF1 between 5 nm and 15 nm. In some embodiments, TF1 is substantially the same as TVF1. In other embodiments, TF1 and TVF1 differ by at most 15%.

As shown, the ferroelectric layer 110 is directly between the electrode 106 and the relaxor ferroelectric layer 104. In some embodiments, depending on the application, the ferroelectric layer 110 includes a relaxor ferroelectric material, and in other embodiments, the ferroelectric layer 110 includes a non-relaxor ferroelectric material. As shown, the ferroelectric layer 110 has a lateral thickness TF2 between 5 nm and 50 nm and a vertical thickness TVF2 between 5 nm and 50 nm. In some embodiments, TF2 is substantially the same as TVF2. In other embodiments, TF2 and TVF2 differ by at most 15%. The variations in the lateral and vertical thicknesses of the electrode 102, the relaxor ferroelectric layer 104, and the ferroelectric layer 110 can be attributed to the process used to manufacture the capacitor 300.
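One practical consequence of such a conformal, non-planar geometry is that the capacitor plate area within a given footprint grows with trench depth. The sketch below makes a rough estimate of that area gain; the trench dimensions are hypothetical, and the rectangular-wall approximation ignores the tapering and corner effects described above.

```python
# Rough plate-area estimate for a conformal trench MIM capacitor versus a
# planar capacitor of the same footprint. Dimensions are hypothetical and
# the rectangular-wall approximation ignores tapering and corner effects.

def trench_area_um2(width_um, length_um, depth_um):
    """Footprint area plus the four sidewalls of a rectangular trench."""
    bottom = width_um * length_um
    sidewalls = 2.0 * depth_um * (width_um + length_um)
    return bottom + sidewalls

w, l, d = 0.1, 0.1, 0.5  # 100 nm x 100 nm footprint, 500 nm deep (assumed)
planar = w * l
trench = trench_area_um2(w, l, d)
print(f"area gain from the trench: {trench / planar:.0f}x")  # ~21x here

# Capacitance scales with plate area, so the same relaxor film delivers
# roughly this multiple of the planar capacitance in the same footprint.
```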
In the illustrative embodiment, the electrode 106 has a shape that is influenced by the shape of each of the electrode 102, the relaxor ferroelectric layer 104, and the ferroelectric layer 110. As shown, the electrode 106 has a trapezoidal shape, and the upper portion of the electrode 106 has a lateral width WE2. As shown, WE2 is greater than the maximum lateral width WV2 of the lowermost portion of the via electrode 306 that is in direct contact with the electrode surface 106A. WE2 is greater than WV2 to prevent an electrical short between the via electrode 306 and the electrode 102.

In an embodiment, the dielectric 302, the dielectric 308, and the dielectric 310 (adjacent to the via electrode 306) each include the same material. In an embodiment, the dielectric 302, the dielectric 308, and the dielectric 310 each include a material such as, but not limited to, silicon dioxide, silicon nitride, silicon carbide, or carbon-doped silicon oxide. In one embodiment, the dielectric 302, the dielectric 308, and the dielectric 310 each include materials that differ from one another. In another embodiment, any two of the dielectrics 302, 308, or 310 include the same or substantially the same material.

In other examples, trench-capacitor versions of the capacitors 100A, 100C, 100D, and 100E may each have a geometry that is the same as, or substantially the same as, the geometry of the trench capacitor 300.

In other examples, the dielectric 302 may be replaced by a conductive material used to form interconnects. In such an example, two or more capacitors that are adjacent to, but spaced apart from, each other may be electrically coupled through the conductive material.

FIG. 4A shows a cross-sectional view of a conductive layer 400 adjacent to a pair of trench capacitors, such as a trench capacitor 402 and a trench capacitor 404. Depending on the application, the conductive layer 400 may include tantalum, a nitride of tantalum or titanium, ruthenium, or tungsten.

In some embodiments, such as the illustrated embodiment, the conductive layer 400 laterally surrounds the trench capacitor 402 and the trench capacitor 404. As shown, the trench capacitor 402 and the trench capacitor 404 are embedded in the conductive layer 400, where the conductive layer 400 is in direct contact with the lowermost trench capacitor surfaces 402A and 404A. The conductive layer 400 may be electrically coupled from above the surface 400A or from below the surface 400A. In the illustrative embodiment, the conductive layer 400 is electrically coupled through a conductive via 406 above the conductive layer 400.

In the illustrative embodiment, when the conductive layer 400 is energized, the electrode 102 of the trench capacitor 402 is electrically coupled with the electrode 102 of the trench capacitor 404 through the conductive layer 400.

In an embodiment, as shown, the trench capacitor 402 is coupled to a conductive via 408 and the trench capacitor 404 is coupled to a conductive via 410. The conductive via 408 contacts only the upper portion of the electrode 106 of the trench capacitor 402, and the conductive via 410 contacts only the upper portion of the electrode 106 of the trench capacitor 404. In such a configuration, each of the trench capacitors 402 and 404 can be individually charged or discharged during operation.

In other embodiments, a conductive bridge may be present that provides electrical coupling between the electrodes 106 in each of the trench capacitors 402 and 404. The dashed box 411 indicates the outline of a conductive bridge directly on, and coupled with, each of the conductive vias 408 and 410.

In other examples, the conductive layer 400 may be present only directly between the trench capacitor 402 and the trench capacitor 404 and directly below the trench capacitor surfaces 402A and 404A.
In some such embodiments, the conductive layer 400 does not extend laterally beyond the trench capacitor surfaces 402A and 404A. In some other such embodiments, a dielectric is present adjacent to the trench capacitor sidewalls 402B and 404B, and the conductive via 406 is located laterally between the conductive via 408 and the conductive via 410. In such an embodiment, a conductive bridge connecting the conductive via 408 and the conductive via 410 may exist in a plane behind the plane shown in the cross-sectional view of FIG. 4A.

In the configuration shown, the size and composition of each of the trench capacitors 402 and 404 are substantially the same. In an embodiment, the trench capacitor 402 and the trench capacitor 404 include one or more materials of the trench capacitor 300 described in connection with FIG. 3. Referring again to FIG. 4A, in other embodiments, the materials of the electrode 102, the relaxor ferroelectric layer 104, the ferroelectric layer 110, and the electrode 106 in the trench capacitor 402, and their associated lateral and vertical thicknesses, may correspondingly differ from the materials and thicknesses of the electrode 102, the relaxor ferroelectric layer 104, the ferroelectric layer 110, and the electrode 106 in the trench capacitor 404.

In other embodiments, the capacitors 402 and 404 have multiple layers of relaxor and non-relaxor materials, and electrode materials, such as those described above in connection with FIGS. 1B, 1D, and 1E.

FIG. 4B shows a plan view of the trench capacitors 402 and 404 along the line A-A' in FIG. 4A. Referring again to FIG. 4B, each of the trench capacitors 402 and 404 has a circular plan-view shape, as shown. In an embodiment, the electrode 102, the relaxor ferroelectric layer 104, and the ferroelectric layer 110 are arranged concentrically. In another embodiment, each of the trench capacitors 402 and 404 has a substantially rectangular plan-view shape, as indicated by the dashed lines 414A and 414B.

FIG. 4C shows a cross-sectional view of a pair of trench capacitors, such as the trench capacitor 402 and the trench capacitor 404, separated by a conductive layer 400 including copper, in accordance with an embodiment of the present disclosure. In applications where the conductive layer 400 includes copper, the shape of the two or more trench capacitors may differ substantially from the trapezoidal shape of the trench capacitors 402 and 404 shown in FIG. 4A. Referring again to FIG. 4C, such a difference in the shape of the trench capacitors 402 and 404 can be attributed to the process used to implement copper in the conductive layer 400. In the illustrative embodiment, the trench capacitor sidewalls 402B and 402C and the trench capacitor sidewalls 404B and 404C are substantially perpendicular to the conductive layer surface 400A.

Depending on the embodiment, the conductive layer 400 may be above a layer 416 that is a conductor or an insulator. In an embodiment, the layer 416 is a conductor and includes copper or any other conductive material, such as, but not limited to, a nitride of titanium or tantalum, ruthenium, tungsten, titanium, or tantalum. In other embodiments where the layer 416 includes a conductive material other than copper and the layer 400 includes copper, the surface 400A may extend below the lowermost trench capacitor surfaces 402A and 404A, as will be discussed further below.
In other embodiments, the layer 416 is an insulator and includes the same or substantially the same material as the dielectric 302 described in connection with FIG. 3.

Referring again to FIG. 4C, in an illustrative embodiment, the conductive layer 400 is electrically coupled through a via 418 on the conductive layer 400. As shown, a conductive via 420 contacts only the upper portion of the electrode 106 of the trench capacitor 402, and a conductive via 422 contacts only the upper portion of the electrode 106 of the trench capacitor 404. In such a configuration, each of the trench capacitors 402 and 404 can be individually charged or discharged during operation.

In other embodiments, a conductive bridge may be present that provides electrical coupling between the electrode 106 of the trench capacitor 402 and the electrode 106 of the trench capacitor 404. The dashed box 426 indicates the outline of a conductive bridge directly on, and coupled with, each of the conductive vias 420 and 422. The conductive bridge may be in a plane behind the plane shown in the cross-sectional view of FIG. 4C.

In other embodiments, the capacitors 402 and 404 have multiple layers of relaxor and non-relaxor materials, and electrode materials, such as those described above in connection with FIGS. 1B, 1D, and 1E.

FIG. 5 shows a flowchart of a method 500 for fabricating a capacitor. The method 500 begins at operation 510 by forming a first conductive interconnect layer over a substrate. The method continues at operation 520 with forming dummy structures on the transition layer. At operation 530, the method 500 involves forming a second conductive layer adjacent to the dummy structures. At operation 540, the method 500 involves removing the dummy structures to create openings. The method continues at operation 550 with forming a capacitor in each of the openings. The method ends at operation 560 with forming a dielectric over the capacitors, patterning the dielectric to form openings, and forming a conductive via in each of the openings.

FIG. 6A shows a substrate 600 and a first transition layer 602 formed above the substrate 600. In an embodiment, the material of the transition layer 602 is the same as, or substantially the same as, the material of the layer 416 described above. In one embodiment, the transition layer 602 includes copper deposited on the substrate 600 by an electroplating process. A layer of an adhesion material, such as tantalum or ruthenium, may be deposited on the substrate 600 before the copper deposition.

In another embodiment, the transition layer 602 includes NdScO3 and provides a suitable surface for nucleation of the electrode layer deposited in a later operation.

In an embodiment, the substrate 600 includes a suitable semiconductor material, such as, but not limited to, single-crystal silicon, polycrystalline silicon, or silicon-on-insulator (SOI). In another embodiment, the substrate 600 includes other semiconductor materials, such as germanium, silicon germanium, or a suitable III-N or III-V compound. In another embodiment, the substrate 600 includes NdScO3. Logic devices such as MOSFET transistors and access transistors may be formed on the substrate 600. In some embodiments, an integrated circuit including a transistor is formed between the transition layer 602 and the substrate 600.

FIG. 6B shows the structure of FIG. 6A after the dummy structures 604 and 606 are formed on the transition layer 602, in accordance with an embodiment of the present disclosure.
In an embodiment, a dielectric material is blanket deposited on the transition layer 602 to the desired thickness. The dummy structures 604 and 606 may be formed by forming a mask over the dielectric material using a photolithography technique and then etching the dielectric material. In some embodiments, the choice of dielectric material may be limited to dielectric materials that do not require a corrosive gas (e.g., chlorine or bromine) for patterning. In some such embodiments, the dielectric material may include the same or substantially the same material as the dielectric 302 discussed in connection with FIG. 3. Referring again to FIG. 6B, in an example where the transition layer 602 does not include copper, the dummy structures 604 and 606 may each include a material, such as polysilicon, that can be implemented for ease of patterning. In one such embodiment, the uppermost surface 602A of the transition layer 602 may become recessed below the lowermost dummy structure surfaces 604A and 606A during the patterning used to form the dummy structures. It is to be understood that the sidewall profiles of the dummy structures 604 and 606 may be substantially perpendicular to the uppermost surface 602A. In other embodiments, during the etching process, the sidewall profiles of the dummy structures 604 and 606 may become tapered relative to the uppermost surface 602A. In some such embodiments, the tapering causes the bases at the lower dummy structure surfaces 604A and 606A to be larger than the upper dummy structure surfaces 604B and 606B, respectively.

FIG. 6C shows the structure of FIG. 6B after a conductive interconnect layer 608 is formed on the transition layer 602, adjacent to the dummy structures 604 and 606. In an embodiment, the conductive interconnect layer 608 includes copper or any other conductive material, such as a nitride of tungsten, titanium, or tantalum, ruthenium, tungsten, titanium, or tantalum. The conductive interconnect layer 608 may be deposited to a uniform thickness on the surface of the transition layer 602 and on the dummy structures 604 and 606. After deposition of the conductive interconnect layer 608, a planarization process is performed to remove the excess conductive interconnect layer 608 from above the dummy structures 604 and 606. The planarization exposes the upper surfaces 604B and 606B. In an embodiment, the planarization process includes a chemical mechanical polishing (CMP) process. In embodiments where the conductive interconnect layer 608 includes copper, a liner layer including tantalum, ruthenium, or copper may be deposited before the copper conductive interconnect layer 608 is deposited. In one such embodiment, the liner layer is deposited on the transition layer 602 and on the sidewalls of the dummy structures 604 and 606. The liner layer is also deposited on the surfaces 604B and 606B but is removed during the planarization process.

FIG. 6D shows the structure of FIG. 6C after removing the dummy structures 604 and 606 to form a first opening 610 and a second opening 612. In an embodiment, a wet chemical process or a plasma etch process may be used to selectively remove the materials of the dummy structures 604 and 606 without damaging the conductive interconnect layer 608.

FIG. 6E shows the structure of FIG. 6D after forming a trench capacitor 614 in the opening 610 and a trench capacitor 616 in the opening 612, in accordance with an embodiment of the present disclosure.
In an embodiment, the materials of the electrode 102, the relaxor ferroelectric layer 104, the ferroelectric layer 110, and the electrode 106 are sequentially deposited into the openings 610 and 612, and then planarized. In an exemplary embodiment, the relaxor ferroelectric layer 104 includes a relaxor ferroelectric material. The relaxor ferroelectric layer 104 and the ferroelectric layer 110 may be deposited by a physical vapor deposition (PVD) process, molecular beam epitaxy (MBE), or an atomic layer deposition (ALD) process at a processing temperature lower than 600 degrees Celsius. In other embodiments, a superlattice structure can be realized by a lamination or co-deposition process. In one embodiment, the relaxor ferroelectric layer 104 includes a combination of lead, niobium, oxygen, and one of magnesium or zirconium.

The deposition process includes forming the material for the electrode 102 on the transition layer 602 and on the sidewalls and the uppermost surface 608A of the conductive interconnect layer 608. In an embodiment, when the material used for the electrode 102 includes Ba, Sr, Ru, and O, an epitaxial transition layer 602 including NdScO3 may provide a suitable surface for the nucleation of the material of the electrode 102. The material of the electrode 102 may be deposited by chemical vapor deposition (CVD), plasma enhanced chemical vapor deposition (PECVD), a physical vapor deposition (PVD) process, molecular beam epitaxy (MBE), or an atomic layer deposition (ALD) process. The process continues by forming the material of the relaxor ferroelectric layer 104 on the material of the electrode 102 and forming the material of the ferroelectric layer 110 on the relaxor ferroelectric layer 104. The deposition process ends with forming a material for the electrode 106 on the material of the ferroelectric layer 110. The material of the electrode 106 may likewise be deposited by CVD, PECVD, a PVD process, MBE, or an ALD process. In an embodiment, the planarization process includes a CMP process and removes material deposited on the uppermost surface 608A.

FIG. 6F shows the structure of FIG. 6E after the dielectric layer 618 is formed on the conductive interconnect layer surface 608A and on the uppermost surfaces 614A and 616A of the trench capacitors. In an embodiment, the dielectric 618 includes any material having sufficient dielectric strength to provide electrical isolation, such as but not limited to silicon dioxide, silicon nitride, silicon carbide, or carbon-doped silicon oxide. A mask 626 may be formed on the dielectric; the mask can be formed by a photolithography process. The mask defines the positions of the openings to be formed when the dielectric 618 is patterned. In an embodiment, the openings 620, 622, and 624 are formed in the dielectric material 618 using a plasma etching process. In other embodiments, the opening 620 may not be formed adjacent to the opening 622. After the formation of the openings 620, 622, and 624, the mask 626 is removed.

FIG. 6G shows the structure of FIG. 6F after the via electrodes 628, 630, and 632 are formed. In an embodiment, the material of the via electrodes 628, 630, and 632 is substantially the same as the material of the via electrode 306 described in conjunction with FIG. 3.
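For reference, a minimal sketch (using the layer names above; the list layout is an illustrative assumption) of the FIG. 6E fill sequence, pairing each layer deposited into the openings with the deposition methods the text permits; the sub-600-degree constraint applies to the ferroelectric layers.

CAPACITOR_STACK = [
    ("electrode 102",             ["CVD", "PECVD", "PVD", "MBE", "ALD"]),
    ("relaxor ferroelectric 104", ["PVD", "MBE", "ALD (below 600 C)"]),
    ("ferroelectric 110",         ["PVD", "MBE", "ALD (below 600 C)"]),
    ("electrode 106",             ["CVD", "PECVD", "PVD", "MBE", "ALD"]),
]
# Layers are deposited in order into the openings 610 and 612, then planarized.
for layer, methods in CAPACITOR_STACK:
    print(f"{layer}: {' / '.join(methods)}")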
Referring again to FIG. 6G, the materials for the via electrodes 628, 630, and 632 are deposited in the openings 620, 622, and 624, on the conductive interconnect layer 608, and on the electrode layer 106 of each of the trench capacitors 614 and 616. After deposition, the materials used for the via electrodes 628, 630, and 632 are planarized. The planarization process removes any material for the via electrodes 628, 630, and 632 that is deposited over the dielectric surface 618A.

Although the operations associated with the method of FIGS. 6A-6G are described as forming the trench capacitor shown in FIG. 4C, these operations, in combination with one or more of the operations described above, may be used to form the trench capacitor shown in FIG. 4A.

FIG. 7A shows a circuit diagram 700 showing the coupling between a pair of capacitors. The decoupling capacitors depicted in this figure may include any of the aforementioned capacitors 100A, 100B, 100C, 100D, 100E, 300, and 402. In an embodiment, the terminals A and B of the capacitors C1 and C2, respectively, are connected at a common terminal C maintained at ground potential. In one embodiment, the second terminal D of the capacitor C1 is connected to a first voltage V1, and the second terminal E of the capacitor C2 is connected to a second voltage V2. During operation, in one embodiment, V1 is greater than V2, and in a second embodiment, V2 is greater than V1.

FIG. 7B shows a circuit diagram 702 showing the coupling between three capacitors C1, C2, and C3. The decoupling capacitors C1, C2, and C3 depicted in the figure may include any of the aforementioned capacitors 100A, 100B, 100C, 100D, 300, and 402. In an embodiment, the terminals A, B, and C of the capacitors C1, C2, and C3, respectively, are connected to a common line D that is further connected to the ground terminal. In one embodiment, the second terminal E of the capacitor C1 and the second terminal F of the capacitor C2 are connected to a first voltage source V1. As shown in the figure, the second terminal G of the capacitor C3 is connected to a second voltage source V2. During operation, in one embodiment, V1 is greater than V2, and in a second embodiment, V2 is greater than V1.

FIG. 7C shows a circuit diagram 704 showing the coupling between four capacitors C1, C2, C3, and C4. The decoupling capacitors C1, C2, C3, and C4 depicted in the figure may include any of the aforementioned capacitors 100A, 100B, 100C, 100D, 300, and 402. In an embodiment, the terminals A, B, C, and D of the capacitors C1, C2, C3, and C4, respectively, are connected to a common line E that is further connected to the ground terminal. In one embodiment, the second terminal F of the capacitor C1 is connected to a first voltage source V1. As shown in the figure, the second terminal G of the capacitor C2, the second terminal H of the capacitor C3, and the second terminal "I" of the capacitor C4 are all connected to a common second voltage source V2. During operation, in one embodiment, V1 is greater than V2, and in a second embodiment, V2 is greater than V1.
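Since the FIG. 7A-7C topologies tie multiple capacitors between a supply rail and a shared ground, the effective decoupling capacitance each rail sees is simply the parallel sum. A hedged sketch, with all capacitance values assumed for illustration:

def rail_capacitance(connections, rail):
    """Sum the capacitors tied between `rail` and ground (parallel combination)."""
    return sum(c for _name, r, c in connections if r == rail)

# FIG. 7C topology: C1 on V1; C2, C3, and C4 share V2. Values are assumptions.
fig_7c = [("C1", "V1", 1e-9), ("C2", "V2", 1e-9),
          ("C3", "V2", 2e-9), ("C4", "V2", 2e-9)]
print(rail_capacitance(fig_7c, "V1"))  # 1e-09 F
print(rail_capacitance(fig_7c, "V2"))  # 5e-09 F

Parasitics are ignored; the point is only that the rails V1 and V2 see independent parallel capacitor banks sharing one ground node, as the figures describe.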
FIG. 8 shows a system 800 that includes a capacitor 402, such as that described in connection with FIG. 4B or FIG. 4C, coupled to an access transistor 801. Referring again to FIG. 8, in an embodiment, the transistor 801 is on the substrate 802 and has a gate 803, a source region 804, and a drain region 806. In the illustrative embodiment, the isolation portion 808 is adjacent to portions of the substrate 802, the source region 804, and the drain region 806. In some embodiments of the present disclosure, for example as shown in the figure, a pair of sidewall spacers 810 are on opposite sides of the gate 803.

The transistor 801 also includes a gate contact 816 above and electrically coupled to the gate 803, a drain contact 814 above and electrically coupled to the drain region 806, and a source contact 812 above and electrically coupled to the source region 804, as shown in FIG. 8. The transistor 801 also includes a dielectric 818 adjacent to the gate 803, the source region 804, the drain region 806, the isolation portion 808, the sidewall spacers 810, the source contact 812, the drain contact 814, and the gate contact 816.

In the illustrative embodiment, the trench capacitor 402 includes the relaxor ferroelectric layer 104 and the ferroelectric layer 110. As shown, the electrode 102 of the trench capacitor 402 is adjacent to the dielectric 828 and the dielectric 818 and is coupled to the drain contact 814. In other embodiments, the trench capacitor 402 may be at the same level as the transistor 801. One or more interconnects may be connected to the electrodes 102 and 106, where at least one of the electrodes 102 or 106 is electrically coupled with the drain contact 814.

Both the gate contact 816 and the source contact 812 are coupled with interconnects. In the illustrative embodiment, the gate contact 816 is coupled with a gate interconnect 824 and the source contact 812 is coupled with a source interconnect 826. The dielectric 828 is adjacent to the source interconnect 826 and the gate interconnect 824. In an embodiment, the system 800 also includes a power supply 830 coupled to the transistor 801.

In an embodiment, the underlying substrate 802 represents a surface used to fabricate integrated circuits. Suitable substrates 802 include materials such as single crystal silicon, polycrystalline silicon, and silicon-on-insulator (SOI), as well as substrates formed of other semiconductor materials. In some embodiments, the substrate 802 is the same or substantially the same as the substrate 600 described in connection with FIG. 6A. Referring again to FIG. 8, the substrate 802 may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates.

In an embodiment, the transistor 801 associated with the substrate 802 is a metal oxide semiconductor field effect transistor (MOSFET, or simply MOS transistor) fabricated on the substrate 802. In some embodiments, the transistor 801 is an access transistor. In various embodiments of the present disclosure, the transistor 801 may be a planar transistor, a non-planar transistor, or a combination of both. Non-planar transistors include FinFET transistors, such as double-gate transistors and tri-gate transistors, and wrap-around or all-around gate transistors, such as nanoribbon and nanowire transistors.

In some embodiments, the gate 803 includes at least two layers: a gate dielectric layer 803A and a gate electrode 803B. The gate dielectric layer 803A may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (SiO2), and/or high-k dielectric materials.
The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that can be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, when a high-k material is used, an annealing process may be performed on the gate dielectric layer 803A to improve its quality.

The gate electrode 803B of the access transistor 801 of the substrate 802 is formed on the gate dielectric layer 803A and, depending on whether the transistor is a PMOS or an NMOS transistor, the gate electrode 803B may be composed of at least one P-type work function metal or N-type work function metal. In some embodiments, the gate electrode 803B may be composed of a stack of two or more metal layers, in which one or more metal layers are work function metal layers and at least one metal layer is a conductive fill layer.

For PMOS transistors, metals that can be used for the gate electrode 803B include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides such as ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a work function between about 4.6 eV and about 5.2 eV. For NMOS transistors, metals that can be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals, such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a work function between about 3.6 eV and about 4.2 eV.

In some embodiments, the gate electrode may be composed of a "U"-shaped structure including a bottom portion substantially parallel to the surface of the substrate and two sidewall portions substantially perpendicular to the top surface of the substrate. In another embodiment, at least one of the metal layers forming the gate electrode 803B may simply be a planar layer substantially parallel to the top surface of the substrate that does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments of the present disclosure, the gate electrode may be composed of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode 803B may be composed of one or more U-shaped metal layers formed on top of one or more planar, non-U-shaped layers.

The sidewall spacers 810 may be formed of materials such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. The process for forming the sidewall spacers includes deposition and etching process operations. In alternative embodiments, multiple pairs of spacers may be used; for example, two, three, or four pairs of sidewall spacers may be formed on opposite sides of the gate stack. As shown in the figure, a source region 804 and a drain region 806 are formed in the substrate adjacent to the gate stack of each MOS transistor.
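A small illustrative check (an assumption for exposition, not from the disclosure) that classifies a candidate gate-metal work function against the PMOS and NMOS ranges quoted above:

def classify_work_function(phi_ev):
    """Classify a work function (in eV) against the ranges quoted above."""
    if 4.6 <= phi_ev <= 5.2:
        return "P-type work function (PMOS gate electrode)"
    if 3.6 <= phi_ev <= 4.2:
        return "N-type work function (NMOS gate electrode)"
    return "outside the quoted PMOS/NMOS ranges"

print(classify_work_function(5.0))  # in the P-type range
print(classify_work_function(4.0))  # in the N-type range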
The source region 804 and the drain region 806 are usually formed using an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorus, or arsenic may be ion-implanted into the substrate to form the source region 804 and the drain region 806. An annealing process that activates the dopants and diffuses them further into the substrate usually follows the ion implantation process. In the latter process, the substrate 802 may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process can then be performed to fill the recesses with the material used to fabricate the source region 804 and the drain region 806. In some embodiments, a silicon alloy such as silicon germanium or silicon carbide may be used to fabricate the source region 804 and the drain region 806. In some embodiments, the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorus. In other embodiments, one or more alternative semiconductor materials (e.g., germanium or III-V materials or alloys) may be used to form the source region 804 and the drain region 806. In still other embodiments, one or more metal and/or metal alloy layers may be used to form the source region 804 and the drain region 806.

In an embodiment, the source contact 812, the drain contact 814, and the gate contact 816 each include a multilayer stack. In an embodiment, the multilayer stack includes one or more of Ti, Ta, Ru, or Al and a conductive cap on the one or more of Ti, Ta, Ru, or Al. The conductive cap may include a material such as W or Cu.

In an embodiment, both the source interconnect 826 and the gate interconnect 824 include a multilayer stack. In an embodiment, the multilayer stack includes one or more of Ti, Ta, Ru, or Al and a conductive cap on the one or more of Ti, Ta, Ru, or Al. The conductive cap may include a material such as W or Cu.

The isolation portion 808 and the dielectrics 818 and 828 may each include any material that has sufficient dielectric strength to provide electrical isolation. The material may include silicon and one or more of oxygen, nitrogen, or carbon, such as silicon dioxide, silicon nitride, silicon oxynitride, carbon-doped nitride, or carbon-doped oxide.

FIG. 9 shows a computing device 900 according to an embodiment of the present disclosure. As shown in the figure, the computing device 900 houses a motherboard 902. The motherboard 902 may include several components, including but not limited to a processor 901 and at least one communication chip 904 or 905. The processor 901 is physically and electrically coupled to the motherboard 902. In some embodiments, the communication chip 905 is also physically and electrically coupled to the motherboard 902. In another embodiment, the communication chip 905 is part of the processor 901.

Depending on its application, the computing device 900 may include other components that may or may not be physically and electrically coupled to the motherboard 902.
These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, an encryption processor, a chipset 906, an antenna, displays, touch screen displays, touch screen controllers, batteries, audio codecs, video codecs, power amplifiers, global positioning system (GPS) devices, compasses, accelerometers, gyroscopes, speakers, cameras, and mass storage devices (e.g., hard disk drive, compact disc (CD), digital versatile disc (DVD), etc.).

The communication chip 905 implements wireless communication for transmitting data to and from the computing device 900. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, and the like that can transmit data through non-solid media through the use of modulated electromagnetic radiation. The term does not imply that the associated devices do not contain any wires, although in some embodiments they may not. The communication chip 905 can implement any of several wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocols designated as 3G, 4G, 5G, and beyond. The computing device 900 may include a plurality of communication chips 904 and 905. For example, the first communication chip 905 may be dedicated to short-range wireless communication, such as Wi-Fi and Bluetooth, and the second communication chip 904 may be dedicated to long-range wireless communication, such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 901 of the computing device 900 includes an integrated circuit die packaged in the processor 901. In some embodiments, the integrated circuit die of the processor 901 includes one or more interconnect structures, non-volatile storage devices, and transistors coupled with capacitors (such as the trench capacitors 402 or 404 described in FIG. 4B or FIG. 4C). Referring again to FIG. 9, the term "processor" may refer to any device or part of a device that processes electronic data from registers and/or memory to convert the electronic data into other electronic data that can be stored in the registers and/or memory.

The communication chip 905 also includes an integrated circuit die packaged in the communication chip 905. In another embodiment, the integrated circuit dies of the communication chips 904 and 905 include one or more interconnect structures, non-volatile storage devices, capacitors (for example, the trench capacitors 402 or 404 described above), and transistors coupled with the capacitors. Depending on its application, the computing device 900 may include other components that may or may not be physically and electrically coupled to the motherboard 902.
As shown in the figure, these other components may include, but are not limited to, volatile memory (e.g., DRAM) 909, 908, non-volatile memory (e.g., ROM) 910, a graphics CPU 912, flash memory, a global positioning system (GPS) device 913, a compass 914, a chipset 906, an antenna 916, a power amplifier 909, a touch screen controller 911, a touch screen display 917, a speaker 915, a camera 903, and a battery 919, as well as other components such as digital signal processors, encryption processors, audio codecs, video codecs, accelerometers, gyroscopes, and mass storage devices (such as hard disk drives, solid state drives (SSD), compact discs (CD), digital versatile discs (DVD), etc.). In other embodiments, any of the components housed within the computing device 900 and discussed above may include independent integrated circuit storage dies that include an array of one or more NVM devices.

In various embodiments, the computing device 900 may be a laptop computer, a netbook, a notebook computer, an ultrabook, a smartphone, a tablet computer, a personal digital assistant (PDA), an ultramobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In other embodiments, the computing device 900 may be any other electronic device that processes data.

FIG. 10 shows an integrated circuit (IC) structure 1000 that includes one or more embodiments of the present disclosure. The integrated circuit (IC) structure 1000 is an intermediate substrate for bridging a first substrate 1002 to a second substrate 1004. The first substrate 1002 may be, for example, an integrated circuit die. The second substrate 1004 may be, for example, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of the integrated circuit (IC) structure 1000 is to extend a connection to a wider pitch or to reroute a connection to a different connection. For example, the integrated circuit (IC) structure 1000 can couple an integrated circuit die to a ball grid array (BGA) 1007, which can then be coupled to the second substrate 1004. In some embodiments, the first substrate 1002 and the second substrate 1004 are attached to opposite sides of the integrated circuit (IC) structure 1000. In other embodiments, the first substrate 1002 and the second substrate 1004 are attached to the same side of the integrated circuit (IC) structure 1000. And in other embodiments, three or more substrates are interconnected by the integrated circuit (IC) structure 1000.

The integrated circuit (IC) structure 1000 may be formed of epoxy resin, glass fiber reinforced epoxy resin, ceramic material, or a polymer material such as polyimide. In other embodiments, the integrated circuit (IC) structure may be formed of alternative rigid or flexible materials, which may include the same materials described above for semiconductor substrates, such as silicon, germanium, and other group III-V and group IV materials.

The integrated circuit (IC) structure may include metal interconnects 1008 and vias 1010. The vias 1010 include, but are not limited to, through-silicon vias (TSV) 1012. The integrated circuit (IC) structure 1000 may also include embedded devices 1014, including both passive and active devices.
Such embedded devices 1014 include capacitors, decoupling capacitors (for example, the capacitors 100A, 100C, 100D, 100E, 300, 402, or 404 as described above), resistors, inductors, fuses, diodes, transformers, and device structures including a transistor (for example, the transistor 801) coupled with at least one capacitor 402 as described above. The integrated circuit (IC) structure 1000 may also include embedded devices 1014 such as one or more resistive random access devices, sensors, and electrostatic discharge (ESD) devices. More complex devices, such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices, may also be formed on the integrated circuit (IC) structure 1000. According to embodiments of the present disclosure, the devices or processes disclosed herein may be used in the manufacture of the integrated circuit (IC) structure 1000.

Therefore, one or more embodiments of the present disclosure generally relate to the manufacture of embedded microelectronic memory. Microelectronic memory can be non-volatile, where the memory can retain stored information even when it is not powered. Therefore, one or more embodiments of the present disclosure relate to capacitor devices, such as the capacitors 100A, 100C, 100D, 100E, 300, 402, or 404 as described above. The capacitors 100A, 100C, 100D, 100E, 300, or 402 can be used in various integrated circuit applications.

In a first example, a capacitor device includes: a first electrode having a first metal alloy or metal oxide; a ferroelectric layer adjacent to the first electrode, wherein the ferroelectric layer includes two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, potassium, or niobium, and oxygen; and a second electrode coupled to the ferroelectric layer, wherein the second electrode includes a second metal alloy or a second metal oxide.

In the second example, for any of the first examples, the ferroelectric layer includes a combination of one of magnesium or zirconium and lead, niobium, and oxygen.

In the third example, for any one of the first to second examples, the ferroelectric layer includes a first combination of Pb, Mg, Nb, and O and a second combination of Pb, Ti, and O, wherein the atomic percentage of Mg and Nb in the ferroelectric layer is greater than the atomic percentage of Ti in the ferroelectric layer.

In the fourth example, for any one of the first to third examples, the concentration of the first combination is at most 100% greater than the concentration of the second combination.

In the sixth example, for any one of the first to fifth examples, the ferroelectric layer includes a first combination of Pb, Mg, Nb, and O and a second combination of Ba, Ti, and O, wherein the atomic percentages of Pb, Mg, and Nb in the ferroelectric layer are greater than the atomic percentages of Ba and Ti in the ferroelectric layer.

In the seventh example, for any of the first to sixth examples, the ferroelectric layer includes a combination of PbMgxNb1-xO, BaTiO3, PbTiO3, and BiFeO3.

In the eighth example, for any one of the first to seventh examples, the ferroelectric layer includes a first combination of Pb, Mg, Nb, and O and a second combination of Pb, Zr, and O, wherein the atomic percentage of Mg and Nb in the ferroelectric layer is greater than the atomic percentage of Zr in the ferroelectric layer.

In the ninth example, for any one of the first to eighth examples, the ferroelectric layer has a thickness between 5 nm and 50 nm.
In the tenth example, for any of the first to ninth examples, the ferroelectric layer includes a combination of Ba oxide, Ti oxide, and Nd oxide.

In the eleventh example, for any one of the first to tenth examples, the ferroelectric layer is a first ferroelectric layer 104, and the capacitor device further includes a second ferroelectric layer 110 between the first ferroelectric layer and the first electrode or the second electrode.

In the twelfth example, for any one of the first to eleventh examples, the second ferroelectric layer includes two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, neodymium, strontium, or niobium, and oxygen, and wherein the material of the first ferroelectric layer is different from the material of the second ferroelectric layer.

In the thirteenth example, for any of the first to twelfth examples, the second ferroelectric layer includes hafnium and oxygen and is doped with one or more of Zr, Al, Si, N, Y, or La.

In the fourteenth example, the first ferroelectric layer has a dielectric constant between 100 and 2200, and the second ferroelectric layer has a dielectric constant between 20 and 50.

In the fifteenth example, for any of the fourteenth examples, the first ferroelectric layer has a thickness between 4 nm and 49 nm, and the second ferroelectric layer has a thickness between 1 nm and 46 nm, wherein the combined thickness of the first ferroelectric layer and the second ferroelectric layer is between 5 nm and 50 nm.

In a sixteenth example, a capacitor device includes: a first electrode including a first metal alloy or metal oxide; and a multilayer stack adjacent to the first electrode, the multilayer stack including a bilayer stack. The bilayer stack includes one of a first relaxor ferroelectric layer or a first non-relaxor ferroelectric layer containing two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, or niobium, and oxygen, and one of a second relaxor ferroelectric layer or a second non-relaxor ferroelectric layer on the one of the first relaxor ferroelectric layer or the first non-relaxor ferroelectric layer. The multilayer stack further includes a third relaxor ferroelectric layer on the bilayer stack, wherein the third relaxor ferroelectric layer includes a material that is substantially the same as that of the first ferroelectric layer, and a second electrode coupled with the third relaxor ferroelectric layer and including a second metal alloy.

In the seventeenth example, for any of the sixteenth examples, the multilayer stack includes a plurality of bilayers, wherein the number of the plurality of bilayers is in the range of 1 to 10, wherein the material layer stack has a thickness between 5 nm and 50 nm, and wherein the bilayer stack has a thickness between 4 nm and 49 nm and the third relaxor ferroelectric layer has a thickness of at least 1 nm.

In the eighteenth example, for any one of the fourteenth to seventeenth examples, the material included in the third relaxor ferroelectric layer is substantially the same as that of the first ferroelectric layer.

In a nineteenth example, a system includes a transistor above a substrate. The transistor includes: a drain contact coupled to a drain, a source contact coupled to a source, and a gate contact coupled to a gate, wherein the gate is between the gate contact and the drain contact; and a bottom electrode coupled to the drain contact.
The system also includes a capacitor device coupled to the drain terminal of the transistor. The capacitor device includes: a first electrode with a first metal alloy or metal oxide; a ferroelectric layer adjacent to the first electrode, wherein the ferroelectric layer includes two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, or niobium, and oxygen; and a second electrode coupled with the ferroelectric layer, wherein the second electrode includes a second metal alloy or a second metal oxide.

In the twentieth example, for any of the nineteenth examples, the ferroelectric layer includes a combination of lead, magnesium, niobium, and oxygen or a combination of lead, zirconium, niobium, and oxygen, and wherein the transistor is coupled to a power source.

In a twenty-first example, a system includes an integrated circuit, where the integrated circuit includes a capacitor device. The capacitor device includes: a first electrode including a first metal alloy or metal oxide; a ferroelectric layer adjacent to the first electrode, the ferroelectric layer including two or more of lead, barium, manganese, zirconium, titanium, iron, bismuth, strontium, neodymium, or niobium, and oxygen; and a second electrode coupled with the ferroelectric layer, the second electrode including a second metal alloy or a second metal oxide. The system also includes a display device coupled to the integrated circuit, the display device displaying an image based on a signal in communication with the integrated circuit.

In a twenty-second example, for any of the twenty-first examples, the ferroelectric layer includes a combination of lead, magnesium, niobium, and oxygen or a combination of lead, zirconium, niobium, and oxygen. |
A method and apparatus for producing buried ground planes in a silicon substrate for use in system modules is disclosed. Conductor patterns are printed on the surface of the silicon substrate. Pores are created in the printed conductor patterns by a chemical anodization process. The pores are then filled with a conductive metal, such as tungsten, molybdenum, or copper, by a selective deposition process to produce a low impedance ground buried in the substrate. |
What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A silicon interposer substrate comprising:at least one buried ground plane formed within said silicon interposer substrate, said at least one buried ground plane comprising a conductor extending to a surface of said silicon interposer substrate, said at least one conductor comprising a plurality of pores in said silicon interposer substrate filled with a refractory metal; and an insulation layer over said surface of said silicon interposer substrate and in contact with said at least one conductor. 2. The substrate according to claim 1, wherein said conductive metal is copper.3. The substrate according to claim 1, wherein said conductive metal is a refractory metal.4. The substrate according to claim 1, wherein said refractory metal is molybdenum.5. The substrate according to claim 1, wherein said refractory metal is tungsten.6. The substrate according to claim 1, wherein said refractory metal is a refractory metal silicide.7. The substrate according to claim 1, wherein said insulation layer is formed of a high temperature polymer film.8. The substrate according to claim 7, wherein said high temperature polymer film is a polyimide.9. The substrate according to claim 1, wherein said insulation layer is formed of silicon dioxide.10. A substrate comprising:at least a first buried ground plane comprising at least a first conductor extending to a first surface of said substrate and at least a second buried ground plane comprising at least a second conductor extending to a second surface of said substrate, said second surface being opposite said first surface, each of said at least first and second conductors comprising a plurality of pores in said substrate filled with a conductive metal; and an insulation layer over at least one of said first surface and second surface of said substrate and in contact with at least one of said first and second conductors. 11. The substrate according to claim 10, wherein said conductive metal is copper.12. The substrate according to claim 10, wherein said conductive metal is a refractory metal.13. The substrate according to claim 12, wherein said refractory metal is molybdenum.14. The substrate according to claim 12, wherein said refractory metal is tungsten.15. The substrate according to claim 10, wherein said conductive metal is a metal silicide.16. The substrate according to claim 10, wherein said insulation layer is formed of a high temperature polymer film.17. The substrate according to claim 16, wherein said high temperature polymer film is a polyimide.18. The substrate according to claim 10, wherein said insulation layer is formed of silicon dioxide.19. A system module comprising:a silicon interposer substrate having at least one buried ground plane formed within said silicon interposer substrate, said at least one ground plane comprising at least one conductor extending to a surface of said silicon interposer substrate; said at least one conductor comprising a plurality of pores in said silicon interposer substrate filled with a conductive metal; an insulation layer over said surface of said silicon interposer substrate and in contact with said at least one conductor; a first chip mounted on said surface of said substrate; and a second chip mounted on said first chip. 20. The system module according to claim 19, wherein said conductive metal is copper.21. The system module according to claim 19, wherein said conductive metal is a refractory metal.22. 
The system module according to claim 21, wherein said refractory metal is molybdenum.23. The system module according to claim 21, wherein said refractory metal is tungsten.24. The system module according to claim 19, wherein said conductive metal is a metal silicide.25. The system module according to claim 19, further comprising:an insulation layer over said surface of said substrate. 26. The system module according to claim 25, wherein said insulation layer is formed of a high temperature polymer film.27. The system module according to claim 26, wherein said high temperature polymer film is a polyimide.28. The system module according to claim 25, wherein said insulation layer is formed of silicon dioxide.29. The system module according to claim 19, wherein one of said first and second chip includes analog circuits and the other of said first and second chip includes digital circuits.30. A system module comprising:at least a first buried ground plane comprising a substrate having at least one buried conductor extending to a first and second surface of said substrate, said second surface being opposite said first surface, said at least one conductor comprising a plurality of pores in said substrate filled with a conductive metal; an insulation layer over at least one of said first and second surface of said substrate and in contact with said at least one conductor; a first chip mounted on said surface of said substrate; and a second chip mounted on said first chip. 31. The system module according to claim 30, wherein said conductive metal is copper.32. The system module according to claim 30, wherein said conductive metal is a refractory metal.33. The system module according to claim 32, wherein said refractory metal is molybdenum.34. The system module according to claim 32, wherein said refractory metal is tungsten.35. The system module according to claim 30, wherein said conductive metal is a metal silicide.36. The system module according to claim 30, further comprising:an insulation layer over said first and said second surface of said substrate. 37. The system module according to claim 36, wherein said insulation layer is formed of a high temperature polymer film.38. The system module according to claim 37, wherein said high temperature polymer film is a polyimide.39. The system module according to claim 36, wherein said insulation layer is formed of silicon dioxide.40. The system module according to claim 30, wherein one of said first and second chip includes analog circuits and the other of said first and second chip includes digital circuits. |
This application is a divisional application of U.S. patent application Ser. No. 09/199,442 filed Nov. 25, 1998, the entirety of which is incorporated herein by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates generally to semiconductor circuits, and more particularly to substrates having buried ground planes and their method of processing.

2. Description of the Related Art

As improved technology is developed, the size of semiconductor components, and correspondingly the size of end-product equipment in which they are used, continues to decrease. This has led to the concept of a "system on a chip." This concept of a "system on a chip" has been around since the very large scale integration (VLSI) era. As integrated circuit technology enters the ultra large scale integration (ULSI) era, the desire for a "system on a chip" is increasing.

The concept of a system on a chip refers ideally to a computing system in which all the necessary integrated circuits are fabricated on a single wafer or substrate, as compared with today's method of fabricating many chips of different functions, i.e., logic and memory, and connecting them to assemble a system. There are problems, however, with the implementation of a truly high performance system on a chip because of vastly different fabrication processes and different manufacturing yields for the logic and memory circuits. To overcome some of these problems, a "system module" has been developed. A system module may consist of two chips, i.e., a logic chip and a memory chip, with one stacked on the other in a structure called Chip-on-Chip (COC) using a micro bump bonding (MBB) technology. The resulting dual-chip structure is mounted on a silicon substrate. Additional components and chips may also be mounted on the silicon substrate.

The multiple chips mounted on the single substrate in a system module typically include different circuits, i.e., some analog circuits and some digital circuits. This requires a low impedance ground in the system module to suppress digital noise that may appear in the analog circuits of these mixed mode circuits. Digital noise is the side effect of the switching of the logic circuits. High-speed synchronous digital integrated circuits require large switching currents which can induce noise on the power distribution networks and ground busses due to the finite resistance and inductance in these circuits. The noise may consist of voltage spikes appearing at the power supply terminals of the chip with the switching activity. Power supply noise can have a significant effect due to simultaneous switching noise in CMOS integrated circuits. These problems are more severe in mixed-mode circuits and require careful design of the power distribution systems.

Thus, a silicon substrate with a low impedance built-in ground plane is necessary for the system modules to suppress noise. It is also desirable for a built-in ground plane to be planar with the surface of the substrate to maintain a flat surface on the substrate upon which various chips, active circuits, and passive components (such as decoupling capacitors and termination resistors) can be subsequently mounted. A conventional method for forming buried conductors in a substrate is the use of heavy ion implantation of conducting atoms into the substrate to form the conductor. This approach, however, is not economically viable due to the required high-current, high-energy implanters, and may also cause damage to the overlying substrate.
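The switching-noise mechanism described in the background above is the familiar I*R plus L*dI/dt ground bounce caused by the finite resistance and inductance of the supply network. A minimal sketch, with every component value an assumed example rather than a figure from the patent:

def ground_bounce(i_amps, r_ohms, l_henries, ramp_seconds):
    """V = I*R + L*dI/dt for a switching current step I reached in ramp_seconds."""
    return i_amps * r_ohms + l_henries * (i_amps / ramp_seconds)

# 1 A of switching current through 50 milliohms and 1 nH, ramping in 1 ns:
print(f"{ground_bounce(1.0, 0.05, 1e-9, 1e-9):.2f} V")  # ~1.05 V of supply noise

The inductive term dominates at fast edges, which is why the low impedance buried ground plane the patent proposes matters for mixed-mode modules.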
Another conventional method for fabricating a multilevel interconnect is to implant silicon into silicon oxide followed by a selective deposition of tungsten to build a multi-layer structure with low electrical resistivity. This method, however, is suitable only for fabricating a buried conductor in a silicon oxide, and not in a silicon substrate as is required in a system module.

Thus, there exists a need for an apparatus and method for simply and inexpensively fabricating a buried ground plane in a silicon substrate for use in multichip system modules.

SUMMARY OF THE INVENTION

The present invention provides a simple and low-cost scheme for producing a buried ground plane in a silicon substrate. In accordance with the present invention, the desired conductors are patterned by ordinary lithography on the surface of the silicon substrate in a mesh pattern to leave room for other chips and components to be mounted. A porous structure is produced only in the patterned conductors by depositing silicon nitride windows on the silicon and subjecting the wafer to a chemical anodization process. After the formation of the pores, the pores are then filled with a conductive metal by the use of a selective deposition technique. The filled pores may be subjected to a high-temperature annealing process to convert the deposited conductive metal to a metal silicide.

These and other advantages and features of the invention will become apparent from the following detailed description of the invention which is provided in connection with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a cross-sectional view of a portion of a silicon substrate with buried ground planes;

FIG. 2 illustrates a top view of a portion of the silicon substrate of FIG. 1 with buried ground planes;

FIGS. 3A, 3B and 3C illustrate a cross-sectional view of the wafer of FIG. 2 during intermediate processes in accordance with the method of the present invention;

FIG. 4 illustrates a cross-sectional view of a processed wafer with buried ground planes according to a first embodiment of the present invention;

FIG. 5 illustrates a cross-sectional view of a processed wafer according to a second embodiment of the present invention;

FIGS. 6A and 6B illustrate in flow chart form the steps for forming a buried ground plane in accordance with a first and second method of the present invention; and

FIG. 7 illustrates a system module in accordance with the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

The present invention will be described as set forth in the preferred embodiment illustrated in FIGS. 1-7. Other embodiments may be utilized and structural or logical changes may be made without departing from the spirit or scope of the present invention.

The terms "wafer" and "substrate" are used interchangeably and are to be understood as including silicon-on-insulator (SOI) or silicon-on-sapphire (SOS) technology, doped and undoped semiconductors, epitaxial layers of silicon supported by a base semiconductor foundation, and other semiconductor structures. Furthermore, when reference is made to a "wafer" or "substrate" in the following description, previous process steps may have been utilized to form regions or junctions in the base semiconductor structure or foundation.

A cross-sectional diagram of a portion of a wafer 10 having buried ground planes is illustrated generally in FIG. 1.
The term buried as used herein refers both to covered conductors, i.e., under the surface and concealed from view, and to conductors that are formed in the substrate whose top surface is planar with the surface of the substrate. Wafer 10 consists of a first level top conductor 12 and a first level bottom conductor 14. A silicon interposer 16 is provided between the conductors 12, 14. Via holes 20, 22 are provided through the wafer 10 to provide for interconnection between the top and bottom surfaces of wafer 10. A layer 26 of insulating material, such as silicon dioxide (SiO2), may be provided between top conductor 12 and silicon interposer 16. Similarly, a layer of insulating material 28, such as silicon dioxide (SiO2), may be provided between bottom conductor 14 and silicon interposer 16. Buried ground planes 30 are provided within the silicon interposer 16. The buried ground planes 30 provide a low-impedance ground connection suitable for suppressing noise produced by digital circuits that may be mounted on silicon interposer 16.

FIG. 2 illustrates a top view of a portion of the silicon substrate 10 of FIG. 1 across the line a-a'. Space 52 is left between conductors 30 for mounting or integrating chips and/or components. Ground planes 30 may be formed in a mesh pattern to allow for areas such as space 52.

The process for forming buried ground plane conductors in a silicon substrate in accordance with the present invention is as follows. The pattern for the desired conductors, such as conductors 30 of FIG. 2, is printed on the surface of silicon substrate 16 by conventional lithography or any other method for printing a pattern on the surface of substrate 16 as is known in the art. Space 52 may be left between conductors 30 for other chips and components to be mounted.

Once the pattern for conductors 30 has been printed on the surface of substrate 16, a protective layer 72, such as for example silicon nitride, is deposited to form windows on the surface of the silicon substrate 16 in which only the printed pattern areas for conductors 30 are left exposed. FIG. 3A illustrates a cross-sectional diagram of silicon substrate 16 along line b-b' in FIG. 2 after the deposition of the silicon nitride windows. The deposition of a layer 72 of silicon nitride covers the surface of substrate 16, except for the areas where the pattern for conductors 30 has been printed. The preferable thickness for the layer 72 of silicon nitride is approximately 100 nm.

The substrate 16 is then subjected to a chemical anodization process, as is well known in the art, to form a porous layer in the areas not covered with the layer 72 of silicon nitride, i.e., the areas where the pattern for conductors 30 has been printed. Porous silicon may be formed by the anodization of silicon in aqueous solutions of hydrofluoric acid. Pores are etched into the silicon during the anodization process. The resulting structure is illustrated in FIG. 3B. The areas of silicon substrate 16 which have been printed with the pattern for conductors 30 contain a porous layer 74 in substrate 16 created by the anodization process. Those portions of the substrate 16 covered with layer 72 of silicon nitride do not have a porous layer. It is well known that under appropriate anodic conditions, silicon can be converted into a highly porous material. The porosity may be controlled in the 30%-85% range, i.e., the pores in the silicon can comprise approximately 30 to 85% of the total volume within the silicon substrate 16. Thus, a porosity of 50% indicates a material in which half of its volume is comprised of pores within the material.
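To make the porosity figure concrete, here is a short sketch computing the pore volume available to the deposited metal in a patterned conductor region; the dimensions and the helper name are illustrative assumptions, not values from the patent.

def pore_volume_um3(length_um, width_um, depth_um, porosity):
    """Pore volume of a porous conductor region, in cubic microns."""
    assert 0.30 <= porosity <= 0.85, "text quotes a controllable 30%-85% range"
    return length_um * width_um * depth_um * porosity

# A 100 um x 10 um conductor stripe with a 5 um porous depth at 50% porosity:
print(pore_volume_um3(100, 10, 5, 0.50))  # 2500.0 cubic microns of pores to fill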
After the formation of the pores in the substrate 16, a conductor 30 is formed by filling the pores with a conductive metal 76. Since the conductors 30 usually must be able to withstand subsequent high temperature processing, it is preferable to use refractory metals to form the conductors, such as tungsten (W) or molybdenum (Mo) or their silicides. Refractory metals are difficult to pattern by chemical mechanical polishing or standard photolithographic techniques since they are chemically inert. As such, it is desirable to have a self-aligned process that does not require any patterning of the metal. The use of a selective deposition technique, as is known in the art, provides a self-aligned process that does not require any patterning of the metal. For example, if tungsten is used, the chemical vapor deposition (CVD) may be based on tungsten hexafluoride (WF6). The chemical-vapor-deposited tungsten process using either WF6/H2 or WF6/SiH4 chemistry is well known in the art. The tungsten hexafluoride will react with the areas of the exposed substrate 16, but not with the areas of substrate 16 covered with layer 72 of silicon nitride. This will selectively deposit the tungsten in the porous layers 74 in silicon substrate 16 in the areas where the pattern for conductors 30 are printed but not on the areas covered with the silicon nitride layer 72. Molybdenum can be deposited in a similar fashion as tungsten. The deposition of the conductive metal in the pores creates a low impedance buried conductive plane.

FIG. 3C illustrates the silicon substrate 16 after the conductive metal 76, such as tungsten or molybdenum, has been deposited as described above. The chemical vapor deposition of the metal fills in the pores in the areas where the conductors 30 have been patterned, but does not react with the layer 72 of silicon nitride.

As an alternative to the above, copper may be used as the conductive metal for applications which will not require a subsequent high processing temperature. The copper may be deposited into the pores of the areas where conductors 30 have been patterned by a chemical vapor deposition technique similar to that described above with respect to the chemical vapor deposition of tungsten or molybdenum.

After the deposition of the conducting metal 76, the excess metal and nitride windows may be removed by a chemical mechanical polishing process as is known in the art. As illustrated in FIG. 4, an insulation layer 80, formed of silicon dioxide (SiO2), or alternatively, a high-temperature polymer film with a low dielectric constant, such as for example polyimide, may be deposited.

Alternatively, instead of depositing an insulating layer 80, the substrate 16 can be further processed to fabricate the conductors 30 with a refractory metal silicide. This may be preferable for applications in which a very high subsequent processing temperature will be required. After the pores have been filled with the refractory metal, the substrate 16 may be subjected to a high temperature annealing to convert the metal in the pores to a silicide. Preferable parameters for this annealing process for tungsten are a temperature greater than approximately 900[deg.] C. for up to 30 minutes. This annealing step may be combined with other processes at a later stage if desired. The resulting tungsten silicide has a typical resistivity of 18-20 micro-ohm-cm, and can be subjected to subsequent high temperature processing.
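The quoted resistivity relates to a buried conductor's sheet resistance through R_sheet = rho / t; a hedged sketch, with the conductor thickness an assumed example value:

def sheet_resistance_ohm_per_sq(rho_uohm_cm, thickness_um):
    """R_sheet = rho / t; 1 micro-ohm-cm equals 1e-8 ohm-m."""
    return (rho_uohm_cm * 1e-8) / (thickness_um * 1e-6)

# 19 micro-ohm-cm tungsten silicide in a 5 um thick buried conductor (assumed):
print(f"{sheet_resistance_ohm_per_sq(19, 5):.3f} ohm/sq")  # ~0.038 ohm/sq

At the assumed thickness the mesh ground plane stays well below an ohm per square, which is what makes it a low impedance ground for the mounted circuits.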
As illustrated in FIG. 5, the nitride window and excess metal in the channel area may be removed by chemical mechanical polishing to provide a flat and smooth surface for subsequent processing. An insulation layer (not shown), formed of silicon dioxide or a high-temperature polymer film such as polyimide, may be deposited over the surface of substrate 16.

A first and second method of processing buried ground planes in accordance with the present invention are illustrated in flow chart form in FIGS. 6A and 6B. Like steps are referred to by like numerals in each method.

In accordance with a first method of the present invention as illustrated in FIG. 6A, in step 610, the pattern for conductors 30 is printed on the surface of the silicon wafer 16 by conventional lithography or any other method as is known in the art. The pattern for conductors 30 may be in a mesh pattern as illustrated in FIG. 2 to allow sufficient space between conductors 30 for other chips and components to be mounted.

In step 620, a layer of silicon nitride, preferably 100 nm thick, is deposited on the surface of the substrate 16 to form silicon nitride windows. The layer of silicon nitride does not cover the areas where the pattern for conductors 30 has been printed. In step 630, the wafer is subjected to a chemical anodization process, as is known in the art, to produce a porous layer in the substrate 16 in the areas where the pattern for conductors 30 has been printed.

In step 640, a conductor is formed by depositing a conductive metal into the pores of the porous layer produced in the substrate. As noted previously, refractory metals such as tungsten (W) or molybdenum (Mo) are preferable for applications in which the substrate 16 must be able to withstand subsequent high temperature processing. The metal may be deposited using a selective deposition technique as is known in the art. For applications in which the substrate 16 will not be subjected to subsequent high processing temperatures, copper may be used as the conducting metal. The copper may be deposited by a chemical vapor deposition as is known in the art.

In step 650, the excess metal and nitride windows may be removed utilizing a chemical mechanical polishing process as is known in the art or any other method. In step 660, an insulation layer 80, formed of silicon dioxide, or alternatively a high-temperature polymer film with a low dielectric constant, such as polyimide, may be deposited on the surface of the substrate 16.

A second method for producing a buried ground plane in accordance with the present invention is illustrated in FIG. 6B. Steps 610, 620, and 630 are identical to those of FIG. 6A and the description will not be repeated here. After the pores have been created in the substrate 16 in step 630, the conductor is created by depositing a refractory metal into the pores in step 740 using a selective deposition technique as is known in the art. In step 750, the substrate 16 is subjected to a high temperature annealing to convert the metal in the pores to a silicide. Preferable parameters for this annealing process for tungsten are a temperature greater than approximately 900[deg.] C. for up to 30 minutes. This annealing step may be combined with other processes at a later stage if desired.
In step 760, the nitride window and excess metal in the channel area may be removed by chemical mechanical polishing to provide a flat and smooth surface for subsequent processing. In step 770, an insulation layer, formed of silicon dioxide, or alternatively a high-temperature polymer film with a low dielectric constant, such as polyimide, may be deposited on the surface of the substrate 16.

In accordance with the present invention, a buried ground plane can be formed in a silicon substrate simply and inexpensively, without damaging the surrounding environment within the substrate.

FIG. 7 illustrates a portion of a system module having buried ground planes constructed in accordance with the present invention. A high performance system module may be provided with a silicon interposer, such as substrate 102, onto which semiconductor chips or active or passive components can be easily mounted. For example, a first chip 104 may be stacked on a second chip 100 using MBB technology as is known in the art to result in a chip-on-chip module. The resulting chip-on-chip module structure may be mounted onto substrate 102 along with additional active or passive components. Substrate 102 may have a plurality of such chip-on-chip module structures and components mounted on its surface. Each of the system modules may consist of at least two chips, some of which may be analog circuits and others digital circuits. Substrate 102, in accordance with the present invention, may be provided with buried low impedance ground conductors 106 to suppress digital noise in the analog circuits of the modules.

While preferred embodiments of the invention have been described and illustrated above, it should be understood that these are exemplary of the invention and are not to be considered as limiting. Additions, deletions, substitutions, and other modifications can be made without departing from the spirit or scope of the present invention. Accordingly, the invention is not to be considered as limited by the foregoing description but is only limited by the scope of the appended claims. |
A method and apparatus for compressing addresses for transmission includes receiving a transaction at a first device from a source that includes a memory address request for a memory location on a second device. It is determined if a first part of the memory address is stored in a cache located on the first device. If the first part of the memory address is not stored in the cache, the first part of the memory address is stored in the cache and the entire memory address and information relating to the storage of the first part are transmitted to the second device. If the first part of the memory address is stored in the cache, only a second part of the memory address and an identifier that indicates a way in which the first part of the address is stored in the cache are transmitted to the second device. |
1. A method for compressing an address for transmission from a first device to a second device via a link, comprising: receiving, at the first device from a source, a transaction that includes a request for a memory address for a memory location on the second device, wherein the memory address includes a first part and a second part; determining whether the first part of the memory address is stored in a cache located on the first device; and if the first part of the memory address is not stored in the cache located on the first device, storing the first part of the memory address in the cache of the first device, and transmitting the entire memory address and information relating to the storage of the first part to the second device, and if the first part of the memory address is stored in the cache located on the first device, transmitting to the second device only the second part of the memory address and an identifier that indicates a way in which the first part of the memory address is stored in the cache of the first device.

2. The method according to claim 1, further comprising, when the second device receives the information relating to the storage of the first part, storing the information relating to the storage of the first part in a cache located on the second device.

3. The method according to claim 2, further comprising, when the second device receives the second part of the memory address and the identifier indicating the way in which the first part of the memory address is stored in the cache of the first device: retrieving the first part of the memory address from the cache located on the second device based on the identifier; and rebuilding the entire memory address.

4. The method of claim 3, wherein the rebuilding includes appending the first part to the second part.

5. The method according to claim 1, wherein the first part is a tag containing the higher bits of the entire memory address.

6. The method of claim 5, further comprising storing the tag in a location in a table in the cache on the first device, the location being associated with the source that generated the transaction.

7. The method of claim 6, further comprising storing the tag in the table in association with the way in which the tag is stored.

8. The method according to claim 7, wherein the tag is stored in a row in the table, the row being identified by an index indicating where the tag is stored in the table, and wherein the tag is stored in a column associated with the way in which the tag is stored.

9. The method of claim 8, wherein one or more index rows are associated with a specific source.

10. The method of claim 9, wherein the source comprises any one of the following: a processor or an input/output (I/O) device of the first device.

11. The method according to claim 10, wherein the index row is associated with a plurality of way columns.

12. The method according to claim 11, wherein the transaction type includes one or more of the following: a program execution thread, a read request, or a write request.

13. The method of claim 10, wherein a specific row index associated with the I/O device is associated with one or more of the following transactions: a read request or a write request.

14. A device comprising: a first link controller; and a first cache operatively connected to the first link controller, wherein the first link controller: receives a transaction from a source, the transaction including a request for a memory address for a memory location on a second device, wherein the memory address includes a first part and a second part, determines whether the first part of the memory address is stored in the first cache, and if the first part of the memory address is not stored in the first cache, stores the first part of the memory address in the first cache and transmits the entire memory address and information relating to the storage of the first part to the second device, and if the first part of the memory address is stored in the first cache, transmits to the second device only the second part of the memory address and an identifier that indicates a way in which the first part of the memory address is stored in the first cache.

15. The device according to claim 14, wherein the first part is a tag containing the higher bits of the entire memory address.

16. The device of claim 15, wherein the tag is stored in a location in a table in the first cache, the location being associated with the source that generated the transaction.

17. The device of claim 16, wherein the tag is stored in the table in association with the way in which the tag is stored.

18. The device according to claim 17, wherein the tag is stored in a row in the table, the row being identified by an index indicating where the tag is stored in the table, and wherein the tag is stored in a column associated with the way in which the tag is stored.

19. The device of claim 18, wherein one or more index rows are associated with a specific source.

20. The device of claim 19, wherein the index row is associated with a plurality of way columns.

21. The device of claim 14, further comprising a processor.

22. The device of claim 21, wherein the processor is the source that generated the transaction.

23. The device of claim 22, wherein the transaction type includes one or more of the following: a program execution thread, a read request, or a write request.

24. The device of claim 14, further comprising an input/output (I/O) device.

25. The device of claim 24, wherein the I/O device is the source that generated the transaction.

26. The device of claim 25, wherein a specific row index associated with the I/O device is associated with one or more of the following transactions: a read request or a write request.

27. The device according to claim 14, wherein, when information relating to the storage of the first part of the memory address is received by a second link controller of the second device, the second link controller stores the information relating to the storage of the first part in a second cache of the second device.

28. The device according to claim 27, wherein, upon receiving the second part of the memory address and the identifier indicating the way in which the first part of the memory address is stored in the first cache, the second link controller: retrieves the first part of the memory address from the second cache based on the identifier; and rebuilds the entire memory address.

29. The device of claim 28, wherein the rebuilding includes appending the first part to the second part.

30. A non-transitory computer-readable medium having instructions recorded thereon that, when executed by a computing device, cause the computing device to perform operations comprising: receiving, at the computing device from a source, a transaction that includes a request for a memory address for a memory location on a second device, wherein the memory address includes a first part and a second part; determining whether the first part of the memory address is stored in a cache located on the computing device; and if the first part of the memory address is not stored in the cache located on the computing device, storing the first part of the memory address in the cache of the computing device, and transmitting the entire memory address and information relating to the storage of the first part to the second device, and if the first part of the memory address is stored in the cache located on the computing device, transmitting to the second device only the second part of the memory address and an identifier that indicates a way in which the first part of the memory address is stored in the cache of the computing device.

31. The non-transitory computer-readable medium according to claim 30, the operations further comprising, when the second device receives the information relating to the storage of the first part, storing the information relating to the storage of the first part in a cache located on the second device.

32. The non-transitory computer-readable medium of claim 31, the operations further comprising, when the second device receives the second part of the memory address and the identifier indicating the way in which the first part of the memory address is stored in the cache of the computing device: retrieving the first part of the memory address from the cache located on the second device based on the identifier; and rebuilding the entire memory address.

33. The non-transitory computer-readable medium of claim 32, wherein the rebuilding includes appending the first part to the second part.

34. The non-transitory computer-readable medium of claim 30, wherein the first part is a tag containing the higher bits of the entire memory address.

35. The non-transitory computer-readable medium of claim 34, the operations further comprising storing the tag in a location in a table located in the cache on the computing device, the location being associated with the source that generated the transaction.

36. The non-transitory computer-readable medium of claim 35, the operations further comprising storing the tag in the table in association with the way in which the tag is stored.

37. The non-transitory computer-readable medium of claim 36, wherein the tag is stored in a row in the table, the row being identified by an index indicating where the tag is stored in the table, and wherein the tag is stored in a column associated with the way in which the tag is stored.

38. The non-transitory computer-readable medium of claim 37, wherein one or more index rows are associated with a specific source.

39. The non-transitory computer-readable medium of claim 38, wherein the source comprises any one of the following: a processor or an input/output (I/O) device of the computing device.

40. The non-transitory computer-readable medium of claim 39, wherein the index row is associated with a plurality of way columns.

41. The non-transitory computer-readable medium of claim 40, wherein the transaction type includes one or more of the following: a program execution thread, a read request, or a write request.

42. The non-transitory computer-readable medium of claim 39, wherein a specific row index associated with the I/O device is associated with one or more of the following transactions: a read request or a write request.

43. A system comprising: a first device including a first link controller, a first cache, a first processor, and a first input/output (I/O) device; and a second device including a second link controller, a second cache, a second processor, and a second I/O device, wherein the first link controller: receives, from the first processor or the first I/O device, a transaction including a request for a memory address for a memory location on the second device, wherein the memory address includes a first part and a second part, determines whether the first part of the memory address is stored in the first cache, and if the first part of the memory address is not stored in the first cache, stores the first part of the memory address in the first cache and transmits the entire memory address and information relating to the storage of the first part to the second device, and if the first part of the memory address is stored in the first cache, transmits to the second device only the second part of the memory address and an identifier that indicates a way in which the first part of the memory address is stored in the first cache, and wherein the second link controller: when the information relating to the storage of the first part is received, stores the information relating to the storage of the first part in the second cache, or upon receiving the second part of the memory address and the identifier indicating the way in which the first part of the memory address is stored in the first cache: retrieves the first part of the memory address from the second cache located on the second device based on the identifier; and rebuilds the entire memory address.

44. The system of claim 43, wherein the rebuilding includes appending the first part to the second part. |
Method and apparatus for compressing addresses

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/376,096, filed on August 17, 2016, and U.S. Application No. 15/345,639, filed on November 8, 2016, the contents of which are incorporated herein by reference as if fully set forth.

BACKGROUND

The links between chips (for example, processors) transmit control information and data over the same set of lines. For example, on a global memory interconnect (GMI) link, each link data packet transmitted is 128 bits (16B) wide. A typical request transmitted via a link includes a "request" command, a "response" command, and an "acknowledgement" (ACK) command to complete the transaction. These three commands are control packets and are considered overhead. The typical cache line in the system is 64B. Therefore, transmitting 64B of data over the link requires 4 link data packets, plus another 3 link data packets to carry the command packets.

BRIEF DESCRIPTION OF THE DRAWINGS

A more detailed understanding can be obtained from the following description, given by way of example in conjunction with the accompanying drawings.

Figure 1 is a block diagram of an example device in which one or more of the disclosed examples may be implemented; Figure 2 is a block diagram of an example multi-die system; Figure 3 is a flowchart of an example method for compressing addresses; and Figure 4 is an example table of indexes and ways.

DETAILED DESCRIPTION

Because control information and data are transmitted over the same set of lines, available bandwidth is limited and overhead is high, making link bandwidth between processor dies a premium resource. To save link (for example, GMI/GOP/HT/PCIe) bandwidth, address streams that exhibit a high degree of locality can be compressed. As described in more detail herein, the sending link controller (e.g., link interface module) maintains the last upper address bits transmitted per request stream, where a request stream in the context described herein refers to a specific processor (e.g., central processing unit (CPU)), thread, or input/output (I/O) stream. When the higher bits of a subsequent request address match the saved higher bits of the last request from the same stream, the request packet is marked as carrying a compressed address, and the higher bits are not included in the packet's GMI data packet. Upon receiving a request carrying a compressed address, the receiving link controller regenerates the complete request address by retrieving a locally held copy of the higher address bits of the last request from the same stream.

For example, disclosed herein is a method for compressing addresses. The method includes receiving, at a first device from a source, a transaction that includes a request for a memory address for a memory location on a second device, where the memory address includes a first part and a second part. It is determined whether the first part of the memory address is stored in a cache located on the first device. If the first part of the memory address is not stored in the cache located on the first device, the first part of the memory address is stored in the cache of the first device, and the entire uncompressed memory address, together with information relating to the storage of the first part, is transmitted to the second device. If the first part of the memory address is stored in the cache located on the first device, a compressed memory address that includes only the second part of the memory address, together with an identifier indicating the way in which the first part of the address is stored in the cache of the first device, is transmitted to the second device.
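The per-stream match-or-allocate decision described above can be sketched compactly. The following C fragment is a minimal illustration under assumed parameters — a 4-way table row and an arbitrary 20-bit low/high split (the 28-bit tag mentioned later would simply use a different shift); the type and function names are invented for the sketch and are not part of any GMI specification.

```c
#include <stdbool.h>
#include <stdint.h>

#define TAG_SHIFT 20u                     /* assumed tag/low-bits split */

typedef struct {
    uint64_t tag;                         /* saved higher address bits  */
    bool     valid;
} way_entry_t;

typedef struct {
    way_entry_t way[4];                   /* 4 ways per cache index     */
} cache_row_t;

/* Returns true on a hit: send only the low bits plus a 2-bit way id.
 * On a miss the caller allocates a way and sends the full address. */
static bool try_compress(const cache_row_t *row, uint64_t addr,
                         uint64_t *low_bits, unsigned *way_id)
{
    uint64_t tag = addr >> TAG_SHIFT;
    for (unsigned w = 0; w < 4; w++) {
        if (row->way[w].valid && row->way[w].tag == tag) {
            *low_bits = addr & ((1ull << TAG_SHIFT) - 1);
            *way_id   = w;
            return true;
        }
    }
    return false;
}
```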
If the first part of the memory address is stored in a cache located on the first device, the compressed memory address that includes only the second part of the memory address and the first part indicating the address are stored in the cache of the first device The method identifier is transmitted to the second device.Figure 1 is a block diagram of an example device 100 in which one or more of the disclosed embodiments may be implemented. The device 100 may include, for example, a computer, a game device, a handheld device, a set-top box, a television, a mobile phone, or a tablet computer. The device 100 includes a processor 102, a memory 104, a storage device 106, one or more input devices 108, and one or more output devices 110. The device 100 may also optionally include an input driver 112 and an output driver 114.The processor 102 may include a central processing unit (CPU), a graphics processing unit (GPU), the CPU and the GPU are located on the same die, or include one or more processor cores, each of which may be a CPU or a GPU . The memory 104 may be located on the same die as the processor 102, or may be provided separately from the processor 102. The memory 104 may include volatile or non-volatile memory, such as random access memory (RAM), dynamic RAM (DRAM), or cache.The storage device 106 may include a fixed or removable storage device, such as a hard disk drive, a solid state drive, an optical disc, or a flash drive. The input device 108 may include a keyboard, a keypad, a touch screen, a touch pad, a detector, a microphone, an accelerometer, a gyroscope, a biological scanner, or a network connection (for example, a wireless LAN card for transmitting and/or receiving wireless IEEE 802 signals) ). The output device 110 may include a display, a speaker, a printer, a tactile feedback device, one or more lights, an antenna, or a network connection (for example, a wireless LAN card for transmitting and/or receiving wireless IEEE 802 signals).The input driver 112 communicates with the processor 102 and the input device 108 and allows the processor 102 to receive input from the input device 108. The output driver 114 communicates with the processor 102 and the output device 110 and allows the processor 102 to send output to the output device 110. It should be noted that the input driver 112 and the output driver 114 are optional components, and in the absence of the input driver 112 and the output driver 114, the device 100 will operate in the same manner.FIG. 2 is a block diagram of an example of a multi-die device 200. The die device 200 includes one or more dies 210 (e.g., designated as die 1 2101, die 2 2102, and die 3 2103). Each die 210 includes a link controller 211, a link address cache 212, an I/O device 213, and a processor 214. The cache 212 may be substantially similar to the memory 104 described above in FIG. 1. The I/O device may include the elements 108, 110, 112, and 114 of FIG. 1 described above, and the processor 214 may be substantially similar to the processor 102 described in FIG. 1 above. The link controller 211 controls the communication between the dies 210. For example, as shown in Figure 2, die 2 2102 communicates with die 1 2101 and die 3 2103. Therefore, the link controller 211 for each pair of the respective dies 210 controls the communication link between the two dies. 
The link controller 211 in each die 210 communicates with the cache 212, the I/O device 213, and the processor 214 in the same die 210 to help perform the method of compressing addresses described below.

FIG. 3 is a flowchart of an example method 300 for compressing addresses. In step 310, a source generates a transaction, where the transaction includes a memory address for a location on a die 210 different from the one containing the source that generated the transaction. For example, referring back to FIG. 2, a thread running on the processor 214 of die 1 2101 generates a transaction that includes a memory address in the memory located on die 2 2102. That is, the thread running on the processor 214 of die 1 2101 generates a read of or write to the DRAM located on die 2 2102, or executes code that needs to fetch from an address located on die 2 2102. Alternatively, the I/O device 213 of die 1 2101 generates an address on die 2 2102 related to an input/output transaction (e.g., a read or write). The address may also be on die 3 2103 or on any other additional die that exists.

Once a transaction is generated (for example, by a source on die 1 2101), the link controller 211 of die 1 2101 forms an index and compares the tag portion of the address against the local cache 212 for a match (step 320). The tag portion refers to the higher bits of the memory address in the generated transaction, and is, for example, 28 bits wide in an uncompressed 128-bit wide memory address.

The index is formed from information that uniquely identifies each stream. This includes information identifying which CPU, core, thread, or I/O device generated the address. In addition, certain combinations of address bits, virtual channel indicators, read/write indicators, or other information can be used to map transactions from specific sources (e.g., threads or I/O streams) to specific indexes in the cache. Each stream is therefore mapped to a specific index, so that unrelated streams do not continually replace addresses in the cache and reduce efficiency.

Sometimes the same device, such as a CPU or a thread running on it, generates multiple address streams. The generated transactions may include interleaved transactions that read memory via one or more address streams, write to one or more different areas of memory via different address streams, and fetch code via yet another address stream. There is no hard limit on the number of streams a source may generate.

Therefore, each cache index includes multiple associative ways (for example, 4 in this case), which allows the cache to hold the higher bits of the last 4 different address streams. An I/O device can interleave even more address streams. Therefore, each I/O source is allocated multiple indexes in order to spread its address streams across more indexes and prevent useful addresses from being overwritten prematurely. In this case, some address bits are used to map related addresses to different entries. In addition, because reads and writes are independent address streams, reads and writes are mapped to different entries. It should be noted that the receiving link controller needs to have the same information available and use the same algorithm to generate the cache index for a specific transaction, so that it can find the entry in which to store a new address, or read the entry for an address, when decompressing a packet.
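As one concrete (and purely illustrative) reading of these rules, the sketch below maps a stream's identifying information to a table row, keeping CPU threads and I/O reads/writes in disjoint row ranges that match the example table discussed next; the hash inputs are assumptions. Both link controllers would have to compute this function identically for decompression to work.

```c
#include <stdint.h>

typedef enum { SRC_CPU_THREAD, SRC_IO_READ, SRC_IO_WRITE } src_kind_t;

/* Map a request stream to a cache index (table row). */
static unsigned stream_index(src_kind_t kind, unsigned src_id, uint64_t addr)
{
    /* Assumed spreading hash for I/O streams: mix in a few address bits
     * so unrelated I/O streams land on different rows. */
    unsigned spread = src_id ^ (unsigned)(addr >> 12);

    switch (kind) {
    case SRC_CPU_THREAD: return src_id & 31u;         /* rows 0-31  */
    case SRC_IO_READ:    return 32u + (spread & 3u);  /* rows 32-35 */
    case SRC_IO_WRITE:
    default:             return 36u + (spread % 5u);  /* rows 36-40 */
    }
}
```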
In step 330, it is determined whether the tag matches a tag stored in the local cache. For example, the link controller 211 of die 1 2101 checks whether the tag matches a tag stored in the cache memory 212 of die 1 2101. If there is no match in step 330, the tag is stored in a particular way in the local cache memory, and an uncompressed data packet including the entire address and information indicating which way was used is transmitted to the receiving link controller (step 340).

The address information is therefore stored in the cache in a way instructed by the sender. For example, any way that does not hold a valid address (e.g., is not marked "valid") is selected. If all ways are valid, one is selected (e.g., randomly). In an alternative example, the least recently used entry (allocated or used to compress data packets) is tracked and replaced. For example, if the link controller 211 of die 1 2101 receives a transaction that includes an address whose tag does not match any tag stored in the cache memory 212 of die 1 2101 (i.e., the local cache memory of die 1 2101), the link controller 211 of die 1 2101 stores the tag in the table in the cache memory 212 of die 1 2101 at an index and way. The way indicates where within the index the tag is stored, and may reflect the type of transaction from the source that generated the address (for example, an instruction fetch from the processor, or a read or write request).

Figure 4 is an example table 400 of indexes and ways that can be stored in the cache memory. Table 400 includes multiple indexes 410 (designated 4100, 4101, 4102, ..., 41039, and 41040) corresponding to rows 0-40. Although 40 indexes are described, the table can include more or fewer indexes as needed. In addition, four ways 420 are indicated in the table 400 (designated 4201, ..., 4204). Again, more or fewer ways can be used as needed. The table 400 can also be further divided according to the source of the transaction. For example, as shown in Figure 4, rows 0-31 correspond to CPU transactions generated by threads running on the CPU. Rows 32-35 correspond to I/O read transactions, while rows 36-40 correspond to I/O write transactions. The tags are stored in cells corresponding to an index row (i.e., 410) and a way column (i.e., 420).

Referring again to step 340, if the link controller 211 of die 1 2101 does not find a match for the tag in local storage in step 330, an index is formed and the tag is stored in the table 400. For example, if a transaction is received from a thread running on the processor 214, the tag may be stored at index 0, way 0 (i.e., row 4100, column 4201). It should be noted that if the cache memory is full, an entry needs to be evicted before the new one is stored. Therefore, if the table 400 is full, the link controller 211 of die 1 2101 selects an entry to evict, and stores the newly generated address tag at that index and way. For example, any way within the mapped index may be replaced; each index is bound to a specific source through the index generation algorithm, so that a specific transaction maps to one index, and only transactions mapped to that index can access it.
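The miss path of step 340 might look like the fragment below, which continues the earlier sketch: pick a victim way (first invalid entry, otherwise a pseudo-random choice — one of the replacement options mentioned above), store the tag, and return the way number to be carried in the uncompressed packet's header.

```c
/* Allocate a way for a new tag on a miss (step 340); returns the way id
 * that is transmitted with the uncompressed packet. Reuses cache_row_t
 * from the earlier sketch; the LCG victim picker is illustrative. */
static unsigned alloc_way(cache_row_t *row, uint64_t tag, unsigned *lcg)
{
    unsigned victim = 4;
    for (unsigned w = 0; w < 4; w++) {
        if (!row->way[w].valid) { victim = w; break; }
    }
    if (victim == 4) {                    /* all ways valid: random pick */
        *lcg   = *lcg * 1103515245u + 12345u;
        victim = (*lcg >> 30) & 3u;
    }
    row->way[victim].tag   = tag;
    row->way[victim].valid = true;
    return victim;
}
```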
Alternatively, the sending link controller tracks the indexes associated with a specific transaction source and evicts an entry stored for that source. For example, if a transaction is received from processor thread "5", the link controller on die 1 2101 checks which indexes are associated with thread 5, evicts one of them, and stores the information formed for the new transaction in its place. As another alternative, the least recently used address is identified and evicted.

Once the tag is stored, the link controller 211 of die 1 2101 transmits the uncompressed data packet (including the entire address and information indicating the storage way) to the link controller of die 2 2102. The information related to the tag and its storage way is added to the header of the data packet: 1 bit is sent to indicate that the address cache should allocate the current address, and 2 bits indicate which of the 4 ways associated with the single index to which the current transaction maps should be written. The index itself is derived from the content of the uncompressed transaction. To avoid conflicts, before transmitting the current transaction address to the link controller on die 2 2102, the sending link controller on die 1 2101 transmits all transactions that access the same index in the same order in which they updated or matched its cache. Otherwise, if a subsequent transaction modified the index before the first transaction associated with that index was processed, the receiving link controller would not store or look up the correct location of the tag and way associated with the transmitted first transaction.

In step 350, the receiving link controller (i.e., the link controller on die 2 2102) receives the entire address for processing, along with the information related to the storage way, and stores the tag in the corresponding table 400 in the cache memory on die 2 2102.

Referring again to step 330, where the tag matches a tag and way in the local cache memory, the link controller of die 1 2101 transmits the compressed data packet and the pointer bits identifying the storage way of the tag to the receiving link controller on die 2 2102 (step 360). For example, the link controller 211 of die 1 2101 receives a transaction that includes an address with a tag that matches a tag stored in the table 400 of the cache memory 212 of die 1 2101. In this example case, the link controller 211 of die 1 2101 removes the higher bits (i.e., the tag) from the address and transmits only the lower part of the address and a two-bit pointer indicating the way in which the tag is stored in the table 400 to the receiving link controller on die 2 2102. The receiving link controller (i.e., the link controller of die 2 2102 in this example) reads the way information, accesses the tag bits from the table 400 in the cache memory of die 2 2102, and recreates the entire uncompressed data packet for processing (step 370).
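The receiving side of steps 350-370 can be summarized in one function, again reusing the structures from the sketches above; the packet layout (one "compressed" bit, one "allocate" bit, a 2-bit way pointer) is an assumption that mirrors the header bits described in the text.

```c
/* Receiver-side sketch: keep the mirrored table in sync on full packets,
 * and re-attach the saved tag to the low bits on compressed packets. */
typedef struct {
    bool     compressed;                  /* header: address is compressed */
    bool     allocate;                    /* full packet: store the tag    */
    unsigned way_id;                      /* 2-bit way pointer             */
    uint64_t payload;                     /* low bits or whole address     */
} link_pkt_t;

static uint64_t rebuild_address(cache_row_t *row, const link_pkt_t *pkt)
{
    if (!pkt->compressed) {
        if (pkt->allocate) {
            row->way[pkt->way_id].tag   = pkt->payload >> TAG_SHIFT;
            row->way[pkt->way_id].valid = true;
        }
        return pkt->payload;
    }
    /* Compressed: append the saved higher bits to the transmitted low bits. */
    return (row->way[pkt->way_id].tag << TAG_SHIFT) | pkt->payload;
}
```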
The methods provided can be implemented in a general-purpose computer, a processor, or a processor core. Suitable processors include, by way of example, a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, a graphics processor, one or more microprocessors in association with a DSP core, a controller, a microcontroller, application specific integrated circuits (ASICs), field programmable gate array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine. Such processors can be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediate data including netlists (such instructions capable of being stored on a computer-readable medium). The result of such processing can be mask works that are then used in a semiconductor manufacturing process to manufacture a processor that implements aspects of the embodiments.

The methods or processes provided herein can be implemented in a computer program, software, or firmware incorporated in a non-transitory computer-readable storage medium for execution by a general-purpose computer or processor. Examples of non-transitory computer-readable storage media include read only memory (ROM), random access memory (RAM), registers, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROMs and digital versatile discs (DVDs).

Also, although the above method and flowchart are described with respect to communication between two dies (for example, die 1 2101 and die 2 2102), communication may likewise occur between any of the dies.

Disclosed herein is a device. The device includes a first link controller and a first cache operatively connected to the first link controller. The first link controller receives a transaction from a source, the transaction including a request for a memory address for a memory location on a second device, where the memory address includes a first part and a second part. The first link controller determines whether the first part of the memory address is stored in the first cache. If the first part of the memory address is not stored in the first cache, the first link controller stores the first part of the memory address in the first cache and transmits the entire uncompressed memory address, together with information relating to the storage of the first part, to the second device. If the first part of the memory address is stored in the first cache, the first link controller transmits to the second device a compressed memory address that includes only the second part of the memory address, together with an identifier indicating the way in which the first part of the memory address is stored in the first cache.

Disclosed herein is a non-transitory computer-readable medium having instructions recorded thereon that, when executed by a computing device, cause the computing device to perform operations. The operations include receiving, at the computing device from a source, a transaction that includes a request for a memory address for a memory location on a second device, where the memory address includes a first part and a second part. It is determined whether the first part of the memory address is stored in a cache located on the computing device. If the first part of the memory address is not stored in the cache located on the computing device, the first part of the memory address is stored in the cache of the computing device, and the entire uncompressed memory address, together with information relating to the storage of the first part, is transmitted to the second device.
If the first part of the memory address is stored in the cache located on the computing device, a compressed memory address that includes only the second part of the memory address, together with an identifier indicating the way in which the first part of the memory address is stored in the cache of the computing device, is transmitted to the second device.

Disclosed herein is a system. The system includes a first device and a second device, wherein the first device includes a first link controller, a first cache, a first processor, and a first input/output (I/O) device, and the second device includes a second link controller, a second cache, a second processor, and a second I/O device. The first link controller receives, from the first processor or the first I/O device, a transaction including a request for a memory address for a memory location on the second device, where the memory address includes a first part and a second part. The first link controller determines whether the first part of the memory address is stored in the first cache. If the first part of the memory address is not stored in the first cache, the first link controller stores the first part of the memory address in the first cache and transmits the entire uncompressed memory address, together with information relating to the storage of the first part, to the second device. If the first part of the memory address is stored in the first cache, the first link controller transmits to the second device a compressed memory address that includes only the second part of the memory address, together with an identifier indicating the way in which the first part of the memory address is stored in the first cache. Upon receiving the information relating to the storage of the first part, the second link controller stores that information in the second cache. Upon receiving the compressed memory address and the identifier indicating the way in which the first part of the memory address is stored in the first cache, the second link controller retrieves the first part of the memory address from the second cache located on the second device based on the identifier, and rebuilds the entire uncompressed memory address.

In some instances, upon receiving the information relating to the storage of the first part, the second device stores that information in a cache located on the second device. In some instances, when the second device receives the compressed memory address and an identifier indicating the way in which the first part of the address is stored in the cache of the first device, the second device retrieves the first part of the memory address from the cache located on the second device based on the identifier, and rebuilds the entire uncompressed memory address. In some instances, rebuilding includes appending the first part to the second part.

In some instances, the first part is a tag that includes the higher bits of the uncompressed memory address. In some instances, the tag is stored in a location in a table located in the cache on the first device, the location being associated with the source that generated the transaction. In some instances, the tag is stored in the table in association with the way in which the tag is stored.
In some instances, the tag is stored in a row in the table, the row being identified by an index indicating where the tag is stored in the table, and the tag is also stored in a column associated with the way in which the tag is stored. In some instances, one or more index rows are associated with a particular source.

In some examples, the source includes the processor or an input/output (I/O) device of the first device. In some instances, an index row is associated with multiple way columns. In some instances, the transaction type includes one or more of the following: a program execution thread, a read request, or a write request. In some instances, a specific row index associated with the I/O device is associated with a read request or a write request.

Disclosed herein is a device. The device includes a first link controller and a first cache operatively connected to the first link controller. The first link controller receives a transaction from a source, the transaction including a request for a memory address for a memory location on a second device, where the memory address includes a first part and a second part. The first link controller determines whether the first part of the memory address is stored in the first cache. If the first part of the memory address is not stored in the first cache, the first link controller stores the first part of the memory address in the first cache and transmits the entire uncompressed memory address, together with information relating to the storage of the first part, to the second device. If the first part of the memory address is stored in the first cache, the first link controller transmits to the second device a compressed memory address that includes only the second part of the memory address, together with an identifier indicating the way in which the first part of the memory address is stored in the first cache. |
Systems, apparatuses, and/or methods may define a priority of image memory traffic based on image sensor protocol metadata. For example, a metadata identifier may identify image sensor protocol metadata corresponding to an image sensor physical layer and/or an image sensor link layer. Moreover, a prioritizer may define a priority of the image memory traffic based on the image sensor protocol metadata. The priority may be used to control client access to dedicated memory and/or to shared memory. |
CLAIMS

We claim:

1. A system to define a priority of image memory traffic comprising: an image sensor to provide image memory traffic, a metadata identifier to identify image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer, and a prioritizer to define a priority of the image memory traffic based on the image sensor protocol metadata.

2. The system of claim 1, wherein the prioritizer is to include one or more of: an impact determiner to determine an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic, or an augmenter to augment a priority indicator based on the impact magnitude.

3. The system of any one of claims 1 to 2, wherein the image sensor protocol metadata is to include one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the prioritizer is to prioritize the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

4. An apparatus to define a priority of image memory traffic comprising: a metadata identifier to identify image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer, and a prioritizer to define a priority of image memory traffic based on the image sensor protocol metadata.

5. The apparatus of claim 4, wherein the prioritizer is to include an impact determiner to determine an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

6. The apparatus of claim 4, wherein the prioritizer is to include an augmenter to augment a priority indicator based on an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

7. The apparatus of claim 6, wherein the priority indicator is to include memory bus protocol metadata including one or more of a round robin rule, a weight value, a priority value, a deadline, or a buffer occupancy.

8. The apparatus of claim 4, further including one or more of: an arbitration controller to control client access to one or more of dedicated memory or shared memory, or a refresh controller to control refresh of one or more of the dedicated memory or the shared memory.

9. The apparatus of any one of claims 4 to 8, wherein the image sensor protocol metadata is to include one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the prioritizer is to define the priority of the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

10. The apparatus of claim 9, wherein the image sensor physical layer protocol metadata is to include one or more of physical layer power state mode data, physical layer escape mode data, physical layer clock mode data, physical layer start of line data, or physical layer end of line data, and wherein the image sensor link layer protocol metadata is to include one or more of link layer start of line data, link layer end of line data, link layer start of frame data, or link layer end of frame data.

11.
At least one computer readable storage medium comprising a set of instructions, which when executed by a device, cause the device to: identify image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer; and define a priority of image memory traffic based on the image sensor protocol metadata.

12. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the device to determine an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

13. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the device to augment a priority indicator based on an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

14. The at least one computer readable storage medium of claim 13, wherein the priority indicator is to include memory bus protocol metadata including one or more of a round robin rule, a weight value, a priority value, a deadline, or a buffer occupancy.

15. The at least one computer readable storage medium of claim 11, wherein the instructions, when executed, cause the device to one or more of: control client access to one or more of dedicated memory or shared memory; or control refresh of one or more of the dedicated memory or the shared memory.

16. The at least one computer readable storage medium of any one of claims 11 to 15, wherein the image sensor protocol metadata is to include one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the instructions, when executed, cause the device to define the priority of the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

17. The at least one computer readable storage medium of claim 16, wherein the image sensor physical layer protocol metadata is to include one or more of physical layer power state mode data, physical layer escape mode data, physical layer clock mode data, physical layer start of line data, or physical layer end of line data, and wherein the image sensor link layer protocol metadata is to include one or more of link layer start of line data, link layer end of line data, link layer start of frame data, or link layer end of frame data.

18. A method to define a priority of memory traffic comprising: identifying image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer; and defining a priority of image memory traffic based on the image sensor protocol metadata.

19. The method of claim 18, further including determining an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

20. The method of claim 18, further including augmenting a priority indicator based on an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

21. The method of claim 20, wherein the priority indicator includes memory bus protocol metadata including one or more of a round robin rule, a weight value, a priority value, a deadline, or a buffer occupancy.

22. The method of claim 18, further including one or more of: controlling client access to one or more of dedicated memory or shared memory; or controlling refresh of one or more of the dedicated memory or the shared memory.

23.
The method of any one of claims 18 to 22, wherein the image sensor protocol metadata includes one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the method further includes defining the priority of the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

24. The method of claim 23, wherein the image sensor physical layer protocol metadata includes one or more of physical layer power state mode data, physical layer escape mode data, physical layer clock mode data, physical layer start of line data, or physical layer end of line data, and wherein the image sensor link layer protocol metadata includes one or more of link layer start of line data, link layer end of line data, link layer start of frame data, or link layer end of frame data.

25. An apparatus to define a priority of image memory traffic comprising means for performing the method of any one of claims 18 to 24. |
DEFINE A PRIORITY OF MEMORY TRAFFIC BASED ON IMAGE SENSOR METADATA

CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims the benefit of priority to U.S. Non-Provisional Patent Application No. 15/200,074 filed on July 1, 2016.

TECHNICAL FIELD

Embodiments generally relate to memory traffic prioritization. More particularly, embodiments relate to defining a priority of image memory traffic based on image sensor metadata.

BACKGROUND

Memory, such as dynamic random access memory (DRAM), may be accessed by an image signal processor (ISP), a graphics processing unit (GPU), a central processing unit (CPU), and so on. DRAM memory traffic generated from a camera interface may have high peak bandwidth as pixel data is offloaded to a system-on-chip (SoC) and may have a negative impact on system function and/or system power if not sufficiently managed. For example, a bad priority indicator of memory traffic used to arbitrate memory access among clients may starve the clients of memory bandwidth (e.g., restrict access to shared memory) and/or may cause performance issues such as frame stuttering. Moreover, DRAM may be forced into unnecessarily high power states (e.g., reduced self-refresh) when a bad indicator of high priority traffic is used.

A larger on-chip buffer may be implemented to mitigate the impact of a bad priority indicator and/or multiple clients. In this regard, image data may buffer up for a longer time period. Also, shared memory may be left in a self-refresh state for a longer time period. Moreover, memory traffic from one client may be buffered while another client accesses shared memory. A larger buffer may, however, occupy valuable silicon area and/or may increase cost or complexity. Additionally, weighting based on traffic pattern estimation may be used in an attempt to address unnecessary restriction of access to shared memory. Weighting based on traffic pattern estimation, however, may be labor and time intensive, and/or may require complex re-tuning for different camera sensors. Thus, there is considerable room for improvement in prioritizing memory traffic.

BRIEF DESCRIPTION OF THE DRAWINGS

The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:

FIG. 1 is a block diagram of an example of a system to define a priority of memory traffic according to an embodiment;

FIG. 2 is a flowchart of an example of a method to define a priority of memory traffic according to an embodiment; and

FIG. 3 is a block diagram of an example of a computing device according to an embodiment.

DESCRIPTION OF EMBODIMENTS

Turning now to FIG. 1, a system 10 is shown to define a priority of memory traffic according to an embodiment. The system 10 may include a computing platform such as a laptop, a personal digital assistant (PDA), a media content player, a mobile Internet device (MID), a computer server, a gaming platform, or any smart device such as a wireless smart phone, a smart tablet, a smart TV, a smart watch, and so on.
In the illustrated example, the system 10 includes a mobile computing platform (e.g., a smart phone) that may capture, process, store, provide, and/or display an image.

The system 10 may include communication functionality for a wide variety of purposes such as, for example, cellular telephone (e.g., Wideband Code Division Multiple Access/W-CDMA (Universal Mobile Telecommunications System/UMTS), CDMA2000 (IS-856/IS-2000), etc.), WiFi (Wireless Fidelity, e.g., Institute of Electrical and Electronics Engineers/IEEE 802.11-2007, Wireless Local Area Network/LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications), LiFi (Light Fidelity, e.g., IEEE 802.15-7, Wireless Local Area Network/LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications), 4G LTE (Fourth Generation Long Term Evolution), Bluetooth (e.g., IEEE 802.15.1-2005, Wireless Personal Area Networks), WiMax (e.g., IEEE 802.16-2004, LAN/MAN Broadband Wireless LANS), Global Positioning System (GPS), spread spectrum (e.g., 900 MHz), NFC (Near Field Communication, ECMA-340, ISO/IEC 18092), and other radio frequency (RF) purposes. Thus, the system 10 may capture, process, store, provide, and/or display image data locally on the system 10 and/or remotely off the system 10.

As shown in FIG. 1, the system 10 includes an image sensor 12 to capture an image for a video, a photograph, and so on. The image sensor 12 may capture electromagnetic radiation in one or more spectrums such as the infrared spectrum, the visible light spectrum, etc. In one example, the image sensor 12 includes a complementary metal-oxide semiconductor (CMOS) image sensor that may provide pixel data corresponding to captured electromagnetic radiation, wherein the pixel data may include luma data (e.g., light quantity) for an image, chroma data (e.g., light color) for an image, depth data for objects in an image, thermal data for objects in an image, and so on.

The system 10 further includes an integrated circuit (IC) 14 to allow the pixel data to be transmitted from the image sensor 12 to memory 16. In the illustrated example, the IC 14 is a part of a client 18 that is allowed access to the memory 16. The client 18 may include, for example, an IC chip such as a system on chip (e.g., a microcontroller, etc.), a processor such as a baseband processor (e.g., an applications processor), an image signal processor (ISP), a graphics processing unit (GPU), a central processing unit (CPU), a virtual processing unit (VPU), and so on. The client 18 may have dedicated access when the memory 16 is dedicated memory and/or may have shared access when the memory 16 is shared memory. Thus, the memory 16 may include dedicated and/or shared random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, dynamic RAM (DRAM), etc.

The system 10 implements an image sensor protocol 20 to communicatively couple the IC 14 with the image sensor 12 to transfer the pixel data from the image sensor 12 to the client 18. The image sensor protocol 20 may include an image sensor physical layer protocol (ISPLP) to specify image sensor physical layer communication such as, for example, the voltages used to accomplish transmission of pixel data from the image sensor 12, the speed at which pixel data from the image sensor 12 is to be transmitted, a clock mechanism for pixel data from the image sensor 12, and so on.
Additionally, the image sensor protocol 20 may include an image sensor link layer protocol (ISLLP) to specify image sensor link layer communication such as, for example, how data from the image sensor 12 is to be packaged (e.g., packetized, etc.) for transmission.

Notably, the ISPLP and the ISLLP may each support data other than pixel data corresponding to image sensor physical layer communication and image sensor link layer communication, respectively, which may be treated as image sensor protocol metadata to prioritize image memory traffic. Accordingly, while the ISPLP may be manufacturer-specific, image memory traffic may still be prioritized using image sensor protocol metadata from any physical layer protocol such as, for example, Mobile Industry Processor Interface (MIPI®, a trademark of MIPI Alliance) C-PHY, D-PHY, M-PHY, low voltage differential signaling (LVDS), service integration multiple access (SIMA), etc. Similarly, while the ISLLP may be manufacturer-specific, image memory traffic may still be prioritized using image sensor protocol metadata from any link layer protocol such as, for example, MIPI camera serial interface (CSI)-2, CSI-3, UniPro, M-PCIe, etc.

ISPLP metadata may include, for example, image sensor power state mode data, image sensor escape mode data, image sensor clock mode data, and so on. The image sensor power state mode data may include an indication of an ultra-low power state (ULPS) exit or entry, an escape mode (EM) exit or entry, a high-speed clock mode (HSCM) exit or entry, and so on. For example, the image sensor 12 may use appropriate physical voltages to indicate entry to the ULPS and/or exit from the ULPS, which may be treated as ISPLP metadata. The image sensor 12 may also transmit side-band data other than pixel data (e.g., sensor data such as gyroscope data, etc.) using low power transmission through entry to the EM, wherein the low power may be treated as ISPLP metadata. The image sensor 12 may also prepare to transmit a relatively large amount of traffic through entry to the HSCM by enabling a high-speed clock, and may disable the high-speed clock during quiescent periods through exit from the HSCM, wherein the clock enablement or disablement may be treated as ISPLP metadata.

ISPLP metadata may also include start of line (SOL) data and/or end of line (EOL) data. For example, pixel lines may be forwarded by the image sensor 12 a line at a time in a high-speed transmission. Thus, a high-speed voltage associated with an SOL and/or a low-speed voltage associated with an EOL may be treated as ISPLP metadata. In another example, some physical layer protocols support transmission of short packets to indicate an SOL and an EOL. Thus, the short packets at the image sensor physical level corresponding to an SOL and an EOL may be treated as ISPLP metadata.

ISLLP metadata may include packetized data such as synchronization short packet data types. In one example, an SOL and/or an EOL may be recovered by the ISLLP from the ISPLP. Thus, SOL data (e.g., a line start code in CSI) and/or EOL data (e.g., a line end code in CSI) may be treated as ISLLP metadata. In another example, start of frame (SOF) data (e.g., a frame start code in CSI) and end of frame (EOF) data (e.g., a frame end code in CSI) may be treated as ISLLP metadata. Thus, ISLLP metadata that may be used to prioritize image memory traffic may indicate whether image data (e.g., pixel data) is associated with a beginning of a line of pixels, an end of a line of pixels, a beginning of a frame of an image, a continuation of a frame of an image, an end of a frame of an image, and so on.
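One way to picture the metadata surface described above is as a single event type that the metadata identifier 30 could emit for downstream prioritization. The enumeration below is a hypothetical sketch — the names are invented for illustration and are not identifiers defined by MIPI or any other specification.

```c
/* Hypothetical events covering the ISPLP/ISLLP metadata discussed above. */
typedef enum {
    /* image sensor physical layer (ISPLP) */
    META_ULPS_ENTRY, META_ULPS_EXIT,      /* ultra-low power state        */
    META_EM_ENTRY,   META_EM_EXIT,        /* escape mode (side-band data) */
    META_HSCM_ENTRY, META_HSCM_EXIT,      /* high-speed clock mode        */
    META_PHY_SOL,    META_PHY_EOL,        /* start/end of line            */
    /* image sensor link layer (ISLLP), e.g., CSI short packets */
    META_LINK_SOL,   META_LINK_EOL,
    META_LINK_SOF,   META_LINK_EOF        /* start/end of frame           */
} sensor_meta_t;
```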
Thus, ISLLP metadata that may be used to prioritize image memory traffic may indicate whether image data (e.g., pixel data) is associated with a beginning of a line of pixels, an end of a line of pixels, a beginning of a frame of an image, a continuation of a frame of an image, an end of a frame of an image, and so on.

The system 10 implements a memory bus protocol 22 to communicatively couple the IC 14 with the memory 16 to transfer image data from the client 18 to the memory 16. The memory bus protocol 22 may specify, for example, how memory traffic is to be transferred from the client 18 to the memory 16. While the memory bus protocol 22 may be manufacturer-specific, image memory traffic may still be prioritized using image sensor protocol metadata across any memory bus protocol such as, for example, open core protocol (OCP), Advanced eXtensible Interface (AXI), Intel® on-chip system fabric (IOSF), and so on. Similarly, while the memory bus protocol 22 may implement various arbitration schemes to prioritize memory traffic, image memory traffic may still be prioritized using image sensor protocol metadata across any arbitration scheme such as, for example, a round robin scheme, a weight scheme, a priority scheme, a deadline scheme, a buffer occupancy scheme, and so on.

In the illustrated example, the system 10 includes an arbitrator 24 to control access to the memory 16 over the memory bus protocol 22 by a single client when the memory 16 includes dedicated memory and/or by two or more clients when the memory 16 includes shared memory. In one example, the arbitrator 24 may implement a relatively simple round robin arbitration scheme using a default round robin rule to provide equal image memory traffic priority, resulting in equal access by all clients sharing the memory 16. The arbitrator 24 may further implement a weight scheme using a weight value to arbitrate client access to the memory 16. In one example, the arbitrator 24 may compare an n-bit weight value (e.g., a 4-bit value, etc.) assigned to each client to determine a percentage of memory bandwidth assigned to particular clients. Thus, for example, the arbitrator 24 may allow a GPU having a weight value of all ones to receive a majority of memory access relative to an ISP having a weight value of all zeros. In addition, the GPU and the ISP may each drive a same 4-bit value to force round robin memory access based on equality of weight values.

Similarly, OCP may specify an n-bit (e.g., 2-bit, etc.) priority scheme wherein the priority of a memory request (e.g., read/write) may vary from lowest priority based on an all-zeros priority value to highest priority based on an all-ones priority value. Thus, the arbitrator 24 may implement the 2-bit priority scheme to prioritize image memory traffic and to arbitrate client access to the memory 16 using the priority values. The arbitrator 24 may also implement a deadline arbitration scheme using a deadline to arbitrate client access to the memory 16. For example, IOSF may specify an n-bit (e.g., 16-bit, etc.) deadline scheme wherein an offset may be driven by a client from a global timer (e.g., constantly running to indicate a present time on an SoC) with every memory request, which specifies a time the memory request is to be completed (e.g., a deadline). Thus, for example, a memory request may specify completion of a memory transaction within time t = 105 and the client 18 may drive 105 on the deadline, which the arbitrator 24 may use to arbitrate client access to the memory 16.
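As a rough illustration of the weight scheme described above, the following C sketch grants access to the client with the highest n-bit weight and falls back to round robin order on ties. The client table and function names are hypothetical; a real arbitrator 24 would implement this comparison in hardware.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical client record: a 4-bit arbitration weight as described for
 * the weight scheme (all ones = highest share, all zeros = lowest). */
struct client {
    const char *name;
    uint8_t weight;      /* 0..15 */
};

/* Pick the next client to grant: highest weight wins; ties fall back to
 * round robin order starting after the previously granted index. */
static int arbitrate(const struct client *c, int n, int last_grant)
{
    int best = -1;
    for (int i = 1; i <= n; ++i) {
        int idx = (last_grant + i) % n;          /* round-robin scan order */
        if (best < 0 || c[idx].weight > c[best].weight)
            best = idx;
    }
    return best;
}

int main(void)
{
    struct client clients[] = { {"GPU", 15}, {"ISP", 0}, {"CPU", 8} };
    int grant = arbitrate(clients, 3, 0);
    printf("grant -> %s\n", clients[grant].name);   /* GPU */
    return 0;
}
```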
A memory controller 26 (e.g., a DRAM controller, etc.) may also use the deadline to manage memory refresh and provide relatively good power savings performance, as discussed below.

In a further example, the arbitrator 24 may implement a buffer occupancy scheme using a buffer occupancy (e.g., a watermark value) and an indicator of traffic urgency (e.g., a priority value, a deadline, etc.) to arbitrate client access to the memory 16. For example, a first-in-first-out (FIFO) buffer at an output of the client 18 (e.g., an ISP) may be "N" entries deep (e.g., 100 entries deep), and the client 18 may drive an offset from the global timer based on a watermark value. A watermark of "X" entries (e.g., 25 entries) in IOSF may, for example, move a very non-urgent deadline to a moderately urgent deadline, while a watermark of "Y" entries (e.g., 50 entries) may move a moderately urgent deadline to a very urgent deadline. In this case, the client 18 may drive a very far out deadline (e.g., t = 105) when the global timer is presently at t = 5 and only 25 entries of the 100-deep FIFO are occupied, which the arbitrator 24 may use to arbitrate client access to the memory 16. In addition, the client 18 may drive a relatively more urgent deadline (e.g., t = 50) when 50 entries of the 100-deep FIFO are reached, which the arbitrator 24 may use to arbitrate client access to the memory 16. The memory controller 26 may also use the deadline to manage memory refresh, as discussed below.

The system 10 further includes an apparatus 28 that may have logic (e.g., logic instructions, configurable logic, fixed-functionality logic hardware, etc.) configured to implement any of the herein mentioned processes including, for example, memory traffic prioritization, arbitration of memory access, etc. Notably, any of the herein mentioned processes may be implemented based on image sensor protocol metadata corresponding to an image sensor physical layer (e.g., ISPLP metadata) and/or an image sensor link layer (e.g., ISLLP metadata), with or without traditional indicators.

In the illustrated example, the apparatus 28 includes a metadata identifier 30 to identify ISPLP metadata and/or ISLLP metadata. The metadata identifier 30 may identify, for example, physical layer image sensor power state mode data, physical layer image sensor escape mode data, physical layer image sensor clock mode data, physical layer SOL data, physical layer EOL data, and so on. For example, the metadata identifier 30 may identify a voltage for an entry to or exit from a ULPS, an entry to or exit from an EM, an enablement or disablement of a high-speed clock for an entry to or exit from an HSCM, a voltage or a short packet for an SOL or an EOL, etc. The metadata identifier 30 may also identify, for example, link layer SOL data, link layer EOL data, link layer SOF data, link layer EOF data, and so on. For example, the metadata identifier 30 may identify a line start code, a line end code, a frame start code, a frame end code, etc.

The apparatus 28 further includes a prioritizer 32 to define a priority for image memory traffic based on image sensor protocol metadata. In one example, image data (e.g., pixel data) may be transferred from the image sensor 12 to the client 18 at a rate of 30 frames per second (fps), or a frame every 33 ms, and the metadata identifier 30 may identify an EOF at 5 ms.
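The buffer occupancy scheme can be sketched in a few lines of C. The depths, watermarks, and deadline offsets below simply mirror the illustrative numbers in the preceding paragraph (a 100-deep FIFO with watermarks at 25 and 50 entries); they are not normative IOSF values.

```c
#include <stdint.h>
#include <stdio.h>

/* Buffer-occupancy sketch: translate FIFO fill level into a deadline
 * driven with each memory request. */
#define FIFO_DEPTH      100u
#define WATERMARK_LOW   25u    /* below this: not urgent */
#define WATERMARK_HIGH  50u    /* at or above this: very urgent */

static uint32_t request_deadline(uint32_t global_timer, uint32_t fifo_fill)
{
    if (fifo_fill >= WATERMARK_HIGH)
        return global_timer + 45u;    /* very urgent, e.g. t=5 -> t=50 */
    if (fifo_fill >= WATERMARK_LOW)
        return global_timer + 100u;   /* far-out deadline, e.g. t=5 -> t=105 */
    return global_timer + 200u;       /* nearly empty: no urgency */
}

int main(void)
{
    printf("fill=25 -> deadline %u\n", request_deadline(5, 25));   /* 105 */
    printf("fill=50 -> deadline %u\n", request_deadline(5, 50));   /* 50 */
    return 0;
}
```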
The prioritizer 32 may, in response, determine that no traffic will be received for 28 ms until a next frame starts based on the EOF, and lower a priority of the memory traffic for the client 18. Thus, while a FIFO may traditionally become full and run out of buffering space based on an indication of high priority, the prioritizer 32 may deprioritize memory traffic to allow other memory traffic to be buffered and not dropped. Also, clients may not unnecessarily be restricted from access to the memory 16 between frames. The metadata identifier 30 may further identify an SOF and the prioritizer 32 may, in response, determine that FIFOs may not be full at the beginning of a frame and that image data from the image sensor 12 will be streamed relatively soon. Thus, the prioritizer 32 may raise a priority of the image data in response to the SOF.

The prioritizer 32 may further relatively increase a priority (e.g., prioritize) or relatively decrease a priority (e.g., deprioritize) of image data based on an entry to or exit from a ULPS, an entry to or exit from an EM, an enablement or disablement of a high-speed clock for an entry to or exit from an HSCM, a voltage or a short packet for an SOL or an EOL, a line start code, a line end code, and so on. In one example, the prioritizer 32 may deprioritize memory traffic when an EOL is identified since there may be a data gap between lines. In another example, the prioritizer 32 may prioritize memory traffic when an SOL is identified since there may be a large amount of data about to stream. Similarly, the prioritizer 32 may deprioritize memory traffic when a ULPS entry is identified since there may be a relatively small amount of incoming data, and/or may prioritize memory traffic when a ULPS exit is identified since there may be a relatively large amount of incoming data.

The apparatus 28 further includes an impact determiner 34 to determine an impact magnitude of image sensor protocol metadata. An impact magnitude may be based on, for example, an operational characteristic for an image device, an image sensor protocol to be implemented, a memory bus protocol to be implemented, an arbitration scheme to be implemented, and so on. In one example, the impact determiner 34 may determine an impact magnitude based on a scanning characteristic of the image sensor 12, a communication characteristic of the image sensor 12, and so on. For example, the impact determiner 34 may consider that the image sensor 12 scans across an image when offloading pixels to an SoC, left to right, and line-by-line, all the way down an image.

Accordingly, the impact determiner 34 may be aware that the image sensor 12 creates two blanking intervals. A horizontal blanking interval may correspond to an amount of time it takes between sending a previous line and sending a left-most pixel of a next line. In addition, a vertical blanking interval may correspond to an amount of time it takes between sending a last line of a frame and sending a first line of a next frame. Thus, the horizontal blanking interval may be relatively smaller (e.g., by orders of magnitude) than the vertical blanking interval.
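A minimal sketch of the prioritizer's direction logic follows, in C. It captures only which metadata events raise priority and which lower it, per the behavior described above; the event names are invented for the example.

```c
#include <stdio.h>

/* Metadata events and the direction of the priority change each implies. */
typedef enum {
    EV_SOF, EV_EOF, EV_SOL, EV_EOL, EV_ULPS_ENTRY, EV_ULPS_EXIT
} meta_event_t;

/* Returns +1 to prioritize, -1 to deprioritize. */
static int priority_direction(meta_event_t ev)
{
    switch (ev) {
    case EV_SOF:
    case EV_SOL:
    case EV_ULPS_EXIT:   return +1;  /* data about to stream: raise priority */
    case EV_EOF:
    case EV_EOL:
    case EV_ULPS_ENTRY:  return -1;  /* data gap ahead: lower priority */
    }
    return 0;  /* unreachable; quiets compilers */
}

int main(void)
{
    printf("EOF -> %+d\n", priority_direction(EV_EOF));  /* -1 */
    return 0;
}
```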
The impact determiner 34 may, therefore, determine that an impact magnitude of an SOF and an EOF may be large relative to an impact magnitude of an SOL and an EOL based on the scanning operational characteristic of the image sensor 12 and/or the transmission characteristic of the image sensor 12.

The prioritizer 32 further includes an augmenter 36 to augment a priority indicator based on an impact magnitude. Notably, the augmenter 36 may augment any priority indicator of any arbitration scheme implemented over any memory bus protocol using an impact magnitude for available and/or selected image sensor protocol metadata. In one example, the impact determiner 34 may determine an impact magnitude is to be "A" for an SOF and an EOF, wherein A may be a particular value based on a particular priority indicator of a particular arbitration scheme, and wherein the augmenter 36 may modify a priority indicator by ±A (e.g., +A for an SOF and -A for an EOF).

Similarly, for example, the impact determiner 34 may determine an impact magnitude is to be "B" for an SOL and an EOL, and the augmenter 36 may modify a priority indicator by ±B (e.g., +B for an SOL and -B for an EOL). The impact determiner 34 may further determine an impact magnitude is to be "C" for a ULPS exit and a ULPS entry, and the augmenter 36 may modify a priority indicator by ±C (e.g., +C for an exit and -C for an entry). The impact determiner 34 may also determine an impact magnitude is to be "D" for an EM exit and an EM entry, and the augmenter 36 may modify a priority indicator by ±D (e.g., +D for an exit and -D for an entry). The impact determiner 34 may further determine an impact magnitude is to be "E" for an HSCM exit and an HSCM entry, and the augmenter 36 may modify a priority indicator by ±E (e.g., +E for an entry and -E for an exit).

Accordingly, the augmenter 36 may add an impact magnitude to a default round robin rule in a round robin arbitration scheme. In this regard, the round robin arbitration scheme may be converted to a relatively more efficient arbitration scheme. In addition, the augmenter 36 may augment an n-bit weight value of a weight scheme. For example, the augmenter 36 may modify a 4-bit weight value of 8 with a value of -1 for an ISP when an EOL is identified and an impact magnitude is B = 1. Thus, for example, the arbitrator 24 would allow a GPU having a 4-bit weight value of 8 to access the memory 16 more often than the ISP having the 4-bit weight value of 7.

In a further example, the augmenter 36 may augment a 4-bit weight value of 8 with a value of -8 for an ISP when an EOF is identified and an impact magnitude is A = 8. Notably, an SOF and an EOF may cause a relatively larger modulation (e.g., A = 8) than an SOL and an EOL (e.g., B = 1) due to the relatively large vertical blanking interval for the SOF and the EOF. Moreover, the arbitrator 24 may implement a best effort for ISP memory traffic while arbitrating memory access by other clients to the memory 16 based on image traffic priority. For example, the arbitrator 24 may allow a GPU having a 4-bit weight value of 8 four more operations involving the memory 16 than a CPU having a 4-bit weight value of 4, wherein the CPU may be allowed more operations involving the memory 16 than the ISP under best effort conditions.

In another example, the augmenter 36 may augment a priority indicator of traffic urgency in a buffer occupancy scheme.
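The weight-value augmentation above amounts to a signed add saturated to the width of the weight field. A C sketch, using the 4-bit weight and the B = 1 and A = 8 magnitudes from the examples:

```c
#include <stdint.h>
#include <stdio.h>

/* Augmenter sketch: apply a signed impact magnitude to a 4-bit weight value
 * and saturate to the 0..15 range of the weight field. */
static uint8_t augment_weight(uint8_t weight, int impact)
{
    int w = (int)weight + impact;
    if (w < 0)  w = 0;
    if (w > 15) w = 15;
    return (uint8_t)w;
}

int main(void)
{
    /* EOL with impact B = 1: weight 8 -> 7, as in the ISP example above. */
    printf("EOL: %u\n", augment_weight(8, -1));
    /* EOF with impact A = 8: weight 8 -> 0, forcing best-effort service. */
    printf("EOF: %u\n", augment_weight(8, -8));
    return 0;
}
```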
For example, the augmenter 36 may augment a deadline t = 105 (e.g., not very urgent when the current time is t = 0) with the value of -95 when an SOF is identified and an impact magnitude is A = 95. Thus, image data may become relatively urgent with a deadline t = 10 as soon as an SOF is identified to minimize lag. For example, the arbitrator 24 may allow access to the memory 16 relatively faster based on the deadline t = 10. In addition, the augmenter 36 may augment the deadline t = 105 with the value of +95 when an EOF is identified and an impact magnitude is A = 95. Thus, the image data may become relatively less urgent with a deadline t = 200 as soon as an EOF is identified to prevent unnecessary restriction of access by other clients. For example, the arbitrator 24 may allow access to other clients with more urgent traffic that are sharing the memory 16 based on the deadline t = 200.

Additionally, the impact determiner 34 may determine an impact magnitude is to be a relative value such as "minimum", "lowest", "maximum", and/or "highest", and the augmenter 36 may modify a priority indicator to a maximum possible value for "maximum", a minimum possible value for "minimum", and so on. In another example, the impact determiner 34 may determine an impact magnitude is to be "A" for an SOF and is to be "A'" for an EOF, wherein A' may have the same units as A and be a different value (e.g., A = 5, A' = -3). In this regard, the augmenter 36 may modify a priority indicator by ±A for the SOF and by ±A' for the EOF.

The apparatus 28 further includes an arbitrator controller 38 to control client access to the memory 16. The arbitrator controller 38 may, for example, communicate an impact magnitude for image memory traffic to the arbitrator 24, which may, in response, arbitrate access to the memory 16 based on the impact magnitude. The arbitrator controller 38 may also, for example, communicate an augmented priority indicator for image memory traffic to the arbitrator 24, which may, in response, arbitrate access to the memory 16 based on the augmented priority indicator. The arbitrator controller 38 may also communicate that image memory traffic is prioritized or deprioritized, and the arbitrator 24 may, in response, arbitrate access to the memory 16 based on the prioritization or deprioritization.

The arbitrator controller 38 may, for example, communicate a deprioritization to the arbitrator 24 that may, in response, cause memory traffic to be buffered up for a period of time (e.g., when memory is in a self-refresh state) to provide a relatively better power profile and/or burst access to the memory 16 (e.g., when memory is not in a self-refresh state) to provide a relatively better performance profile in a dedicated memory implementation. The arbitrator controller 38 may also, for example, communicate a deprioritization to the arbitrator 24 that may, in response, minimize unnecessary restriction of access to the memory 16 by other clients in a shared memory implementation. The arbitrator controller 38 may also communicate, for example, a prioritization of memory traffic to the arbitrator 24 that may, in response, minimize lag by providing access to the memory 16.

The apparatus 28 further includes a refresh controller 40 to control memory refresh by the memory controller 26.
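The deadline augmentation can be sketched the same way; the numbers below reproduce the worked example (t = 105 becoming t = 10 on an SOF and t = 200 on an EOF with A = 95):

```c
#include <stdint.h>
#include <stdio.h>

/* Deadline augmentation sketch: an SOF pulls the deadline in by the impact
 * magnitude, an EOF pushes it out by the same amount. */
static int64_t augment_deadline(int64_t deadline, int64_t impact, int is_sof)
{
    return is_sof ? deadline - impact : deadline + impact;
}

int main(void)
{
    printf("SOF: t=%lld\n", (long long)augment_deadline(105, 95, 1));  /* 10 */
    printf("EOF: t=%lld\n", (long long)augment_deadline(105, 95, 0));  /* 200 */
    return 0;
}
```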
The refresh controller 40 may, for example, generate a control enter signal based on a deprioritization of memory traffic to cause the memory controller 26 to place the memory 16 into a self-refresh state for a relatively better power profile. The refresh controller 40 may also, for example, generate a control exit signal based on a prioritization of memory traffic to cause the memory controller 26 to take the memory 16 out of a self-refresh state for a relatively better performance profile. Moreover, the refresh controller 40 may coordinate with the arbitrator controller 38 to concurrently modulate a self-refresh state of the memory 16 and control access to the memory 16 to provide a relatively better power and performance profile.

While examples have provided various components of the system 10 for illustration purposes, it should be understood that one or more components of the system 10 may reside in the same and/or different physical and/or virtual locations, may be combined, omitted, bypassed, re-arranged, and/or be utilized in any order. In one example, one or more components of the apparatus 28 may physically reside on the same computing platform as one or more components of the client 18, the arbitrator 24, the memory controller 26, and so on. In another example, one or more components of the apparatus 28 may be distributed among various computing platforms to provide distributed prioritization. Moreover, any or all components of the system 10 may be automatically implemented (e.g., without human intervention, etc.). For example, the metadata identifier 30 may automatically identify image sensor protocol metadata.

Turning now to FIG. 2, a method 42 is shown to define a priority of memory traffic according to an embodiment. The method 42 may be implemented as a module or related component in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out operations shown in the method 42 may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.

Illustrated processing block 44 provides for identifying image sensor protocol metadata, which may correspond to an image sensor physical layer and/or an image sensor link layer. Block 44 may identify, for example, physical layer image sensor power state mode data, physical layer image sensor escape mode data, physical layer image sensor clock mode data, physical layer start of line data, physical layer end of line data, and so on. Block 44 may also identify, for example, link layer start of line data, link layer end of line data, link layer start of frame data, link layer end of frame data, and so on.
For example, block 44 may identify a line start code, a line end code, a frame start code, a frame end code, and so on.

Illustrated processing block 46 provides for defining a priority of image memory traffic based on image sensor protocol metadata. Block 46 may, for example, prioritize image data by increasing the relative priority of the image data. Block 46 may also, for example, deprioritize image data by decreasing the relative priority of the image data.

Illustrated processing block 48 provides for determining an impact magnitude of image sensor protocol metadata. Block 48 may, for example, determine an impact magnitude based on a scanning characteristic of an image sensor, a communication characteristic of an image sensor, and so on. For example, block 48 may determine that an impact magnitude of a start of frame and an end of frame may be large relative to an impact magnitude of a start of line and an end of line based on a scanning operational characteristic of an image sensor, a transmission characteristic of an image sensor, and so on.

Illustrated processing block 50 provides for augmenting a priority indicator based on an impact magnitude. The priority indicator may include, for example, memory bus protocol metadata such as a round robin rule, a weight value, a priority value, a deadline, and/or a buffer occupancy. Thus, block 50 may augment (e.g., modify, modulate, etc.) a round robin rule, a weight value, a priority value, a deadline, and/or a buffer occupancy.

In addition, illustrated processing block 52 provides for controlling client access to memory. Block 52 may, for example, control client access based on an impact magnitude. Block 52 may further control client access based on an augmented priority indicator. In addition, block 52 may control client access to memory based on a prioritization of image memory traffic, a deprioritization of image memory traffic, etc.

Illustrated processing block 54 provides for controlling memory refresh. Block 54 may, for example, generate a control enter signal based on a deprioritization of memory traffic to cause memory to enter a self-refresh state for a relatively better power profile. Block 54 may also, for example, generate a control exit signal to cause memory to exit from a self-refresh state for a relatively better performance profile. Moreover, block 54 may coordinate with block 52 to concurrently modulate a self-refresh state for memory and control access to memory to provide a relatively better power and performance profile.

While independent blocks and/or a particular order has been shown for illustration purposes, it should be understood that one or more of the blocks of the method 42 may be combined, omitted, bypassed, re-arranged, and/or flow in any order. In addition, any or all blocks of the method 42 may include further techniques, including techniques to prioritize image traffic data, control access to memory, and so on. Moreover, any or all blocks of the method 42 may be automatically implemented (e.g., without human intervention, etc.). For example, block 44 may automatically identify image sensor protocol metadata.
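Strung together, blocks 44 through 54 form a short pipeline. The C sketch below is one hypothetical rendering of that flow for a single metadata event; the helper names, magnitudes, and the decision to model block 54 as a print statement are all assumptions made for illustration.

```c
#include <stdio.h>

/* End-to-end sketch of method 42 for one metadata event. */
typedef enum { MD_SOF, MD_EOF } metadata_t;

static metadata_t identify_metadata(void) { return MD_EOF; }            /* block 44 */
static int define_priority(metadata_t m)  { return m == MD_SOF ? +1 : -1; } /* block 46 */
static int impact_magnitude(metadata_t m) { (void)m; return 8; }        /* block 48 */
static int augment_indicator(int w, int dir, int mag)                   /* block 50 */
{
    int v = w + dir * mag;
    return v < 0 ? 0 : (v > 15 ? 15 : v);
}

int main(void)
{
    metadata_t m = identify_metadata();
    int weight = augment_indicator(8, define_priority(m), impact_magnitude(m));
    printf("augmented weight = %d\n", weight);   /* input to arbitration, block 52 */
    if (define_priority(m) < 0)
        printf("enter self-refresh\n");          /* refresh control, block 54 */
    return 0;
}
```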
Turning now to FIG. 3, a computing device 110 is shown according to an embodiment. The computing device 110 may be part of a platform having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer), communications functionality (e.g., wireless smart phone), imaging functionality, media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry) or any combination thereof (e.g., mobile Internet device/MID). In the illustrated example, the device 110 includes a battery 112 to supply power to the device 110 and a processor 114 having an integrated memory controller (IMC) 116, which may communicate with system memory 118. The system memory 118 may include, for example, dynamic random access memory (DRAM) configured as one or more memory modules such as, for example, dual inline memory modules (DIMMs), small outline DIMMs (SODIMMs), etc.

The illustrated device 110 also includes an input/output (IO) module 120, sometimes referred to as a Southbridge of a chipset, that functions as a host device and may communicate with, for example, a display 122 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a sensor 124 (e.g., touch sensor, accelerometer, GPS, biosensor, etc.), an image capture device 125 (e.g., a camera, etc.), and mass storage 126 (e.g., hard disk drive/HDD, optical disk, flash memory, etc.). The processor 114 and the IO module 120 may be implemented together on the same semiconductor die as a system on chip (SoC).

The illustrated processor 114 may execute logic 128 (e.g., logic instructions, configurable logic, fixed-functionality logic hardware, etc., or any combination thereof) configured to implement any of the herein mentioned processes and/or technologies, including one or more components of the system 10 (FIG. 1) and/or one or more blocks of the method 42 (FIG. 2), discussed above. In addition, one or more aspects of the logic 128 may alternatively be implemented external to the processor 114.
Thus, the computing device 110 may define a priority of image memory traffic, arbitrate memory access, etc.

Additional Notes and Examples:

Example 1 may include a system to define a priority of image memory traffic comprising an image sensor to provide image memory traffic, a metadata identifier to identify image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer, and a prioritizer to define a priority of the image memory traffic based on the image sensor protocol metadata.

Example 2 may include the system of Example 1, wherein the prioritizer is to include one or more of an impact determiner to determine an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic, or an augmenter to augment a priority indicator based on the impact magnitude.

Example 3 may include the system of any one of Examples 1 to 2, wherein the image sensor protocol metadata is to include one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the prioritizer is to prioritize the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

Example 4 may include an apparatus to define a priority of image memory traffic comprising a metadata identifier to identify image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer, and a prioritizer to define a priority of image memory traffic based on the image sensor protocol metadata.

Example 5 may include the apparatus of Example 4, wherein the prioritizer is to include an impact determiner to determine an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

Example 6 may include the apparatus of any one of Examples 1 to 5, wherein the prioritizer is to include an augmenter to augment a priority indicator based on an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

Example 7 may include the apparatus of any one of Examples 1 to 6, wherein the priority indicator is to include memory bus protocol metadata including one or more of a round robin rule, a weight value, a priority value, a deadline, or a buffer occupancy.

Example 8 may include the apparatus of any one of Examples 1 to 7, further including one or more of an arbitration controller to control client access to one or more of dedicated memory or shared memory, or a refresh controller to control refresh of one or more of the dedicated memory or the shared memory.

Example 9 may include the apparatus of any one of Examples 1 to 8, wherein the image sensor protocol metadata is to include one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the prioritizer is to define the priority of the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

Example 10 may include the apparatus of any one of Examples 1 to 9, wherein the image sensor physical layer protocol metadata is to include one or more of physical layer power state mode data, physical layer escape mode data, physical layer clock mode data, physical layer start of line data, or physical layer end of line data, and wherein the image sensor link layer protocol metadata is to include one or more of
link layer start of line data, link layer end of line data, link layer start of frame data, or link layer end of frame data.

Example 11 may include at least one computer readable storage medium comprising a set of instructions, which when executed by a device, cause the device to identify image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer, and define a priority of image memory traffic based on the image sensor protocol metadata.

Example 12 may include the at least one computer readable storage medium of Example 11, wherein the instructions, when executed, cause the device to determine an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

Example 13 may include the at least one computer readable storage medium of any one of Examples 11 to 12, wherein the instructions, when executed, cause the device to augment a priority indicator based on an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

Example 14 may include the at least one computer readable storage medium of any one of Examples 11 to 13, wherein the priority indicator is to include memory bus protocol metadata including one or more of a round robin rule, a weight value, a priority value, a deadline, or a buffer occupancy.

Example 15 may include the at least one computer readable storage medium of any one of Examples 11 to 14, wherein the instructions, when executed, cause the device to one or more of control client access to one or more of dedicated memory or shared memory, or control refresh of one or more of the dedicated memory or the shared memory.

Example 16 may include the at least one computer readable storage medium of any one of Examples 11 to 15, wherein the image sensor protocol metadata is to include one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the instructions, when executed, cause the device to define the priority of the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

Example 17 may include the at least one computer readable storage medium of any one of Examples 11 to 16, wherein the image sensor physical layer protocol metadata is to include one or more of physical layer power state mode data, physical layer escape mode data, physical layer clock mode data, physical layer start of line data, or physical layer end of line data, and wherein the image sensor link layer protocol metadata is to include one or more of link layer start of line data, link layer end of line data, link layer start of frame data, or link layer end of frame data.

Example 18 may include a method to define a priority of memory traffic comprising identifying image sensor protocol metadata corresponding to one or more of an image sensor physical layer or an image sensor link layer, and defining a priority of image memory traffic based on the image sensor protocol metadata.

Example 19 may include the method of Example 18, further including determining an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

Example 20 may include the method of any one of Examples 18 to 19, further including augmenting a priority indicator based on an impact magnitude of the image sensor protocol metadata to define the priority of the image memory traffic.

Example 21 may
include the method of any one of Examples 18 to 20, wherein the priority indicator includes memory bus protocol metadata including one or more of a round robin rule, a weight value, a priority value, a deadline, or a buffer occupancy.

Example 22 may include the method of any one of Examples 18 to 21, further including one or more of controlling client access to one or more of dedicated memory or shared memory, or controlling refresh of one or more of the dedicated memory or the shared memory.

Example 23 may include the method of any one of Examples 18 to 22, wherein the image sensor protocol metadata includes one or more of image sensor physical layer protocol metadata or image sensor link layer protocol metadata, and wherein the method further includes defining the priority of the image memory traffic based on one or more of the image sensor physical layer protocol metadata or the image sensor link layer protocol metadata.

Example 24 may include the method of any one of Examples 18 to 23, wherein the image sensor physical layer protocol metadata includes one or more of physical layer power state mode data, physical layer escape mode data, physical layer clock mode data, physical layer start of line data, or physical layer end of line data, and wherein the image sensor link layer protocol metadata includes one or more of link layer start of line data, link layer end of line data, link layer start of frame data, or link layer end of frame data.

Example 25 may include an apparatus to define a priority of image memory traffic comprising means for performing the method of any one of Examples 18 to 24.

Notably, camera sensor communication may require availability of image sensor protocol metadata. Accordingly, embodiments may operate across a wide range of physical layer specifications and protocol layer specifications. Similarly, memory arbitration may be applicable across a wide range of protocols and schemes. A variety of arbitration parameters may be derived from image sensor protocol metadata based on, for example, available control over an arbitration scheme.

Embodiments may utilize, for example, image sensor protocol metadata (e.g., MIPI D-PHY, CSI, etc.) solely and/or in conjunction with traditional priority indicators to intelligently prioritize pixel traffic from a camera sensor for DRAM arbitration. In one example, embodiments may avoid the use of large on-chip data buffers, which may minimize silicon area. In addition, embodiments may avoid traffic pattern estimation.

Embodiments may utilize native image sensor protocol data (e.g., native pattern indicators) directly from a camera sensor to allow DRAM arbitration priority to be adjusted more quickly. Embodiments may, for example, raise priority as soon as a line of image data begins transmission rather than waiting for a pixel buffer occupancy to exceed a predetermined watermark, may lower priority immediately upon identifying an end of frame, and so on. For example, embodiments may raise a deadline to urgent when an SOL short packet is identified on a MIPI CSI D-PHY physical interface, and lower the deadline to non-urgent when an EOL short packet is seen on the MIPI CSI D-PHY physical interface. Thus, techniques described herein provide for relatively good performance (e.g., relatively better bandwidth). For example, priority may not be raised soon enough when camera data is streaming and an arbitrator is waiting to reach a predetermined watermark (e.g., a bad priority indicator may lag data input).
Embodiments may, however, raise priority as soon as data is to be input for relatively better performance in dedicated memory and/or shared memory implementations. Moreover, client starvation may be minimized since a client may not unnecessarily be restricted from access to shared memory when another client is not presently causing image data to be transmitted. In addition, embodiments may provide relatively good power profiles since memory may be intelligently placed in self-refresh for a longer time period.

Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

As used in this application and in the claims, a list of items joined by the term "one or more of" or "at least one of" may mean any combination of the listed terms.
For example, the phrases "one or more of A, B or C" may mean A; B; C; A and B; A and C; B and C; or A, B and C. In addition, a list of items joined by the term "and so on" or "etc." may mean any combination of the listed terms as well as any combination with other terms.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. |
A multi-boot device capable of booting from a plurality of boot devices, each storing a boot image. The multi-boot device determines which boot device to load based on sequence numbers assigned to each of the boot devices. Some embodiments will make this determination using only hardware operations. The multi-boot device compares the sequence numbers of the available boot devices in order to determine the boot image to be loaded. The address of the selected boot image is then mapped to the device's default boot vector. The remaining images are likewise mapped to a secondary boot memory. The device then boots from the default boot vector. The user can change the boot device to be loaded by modifying one or more of the boot sequence numbers. The boot images can be updated without resetting the device by switching execution to and from boot images in the secondary boot memory. |
CLAIMS

WHAT IS CLAIMED IS:

1. A method for booting a multi-boot device having a first boot image and a second boot image, comprising: determining a first boot sequence number associated with the first boot image; determining a second boot sequence number associated with the second boot image; identifying a selected boot image by comparing the first boot sequence number and the second boot sequence number, wherein the comparison determines whether the first boot image or the second boot image is the selected boot image, and wherein the selected boot image is identified by device hardware operations; mapping the address of the selected boot image to a primary boot memory location specified by a default boot vector; and booting the device from the default boot vector.

2. A method in accordance with claim 1, wherein determining the first boot sequence number and determining the second boot sequence number comprises, respectively, reading the first boot sequence number from a predetermined location associated with the first boot image and reading the second boot sequence number from a predetermined location associated with the second boot image.

3. A method in accordance with claim 1, the method further comprising: verifying that multi-booting of the device has been enabled.

4. A method in accordance with claim 1, wherein the boot image not determined to be the selected boot image is a secondary boot image, the method further comprising: mapping the address of the secondary boot image to a secondary boot memory location, wherein the contents of the secondary boot memory location can be updated without affecting device operations executing from the primary boot memory location.

5. A method in accordance with claim 4, the method further comprising: processing a command directing the device to swap from the selected boot image to the boot image stored in the secondary boot memory location; and switching execution of the device from the selected boot image to the boot image stored in the secondary boot memory location, wherein the switch is made without resetting the device.

6. A method in accordance with claim 5, the method further comprising: setting a configuration parameter indicating whether the device is executing from the primary boot memory location or the secondary boot memory location.

7. A method in accordance with claim 3, wherein verification that multi-booting is enabled comprises determining whether the first boot image and the second boot image are both valid boot images.

8. A multi-boot device having a first boot image and a second boot image, the device comprising: memory storage for a first boot sequence number associated with the first boot image; memory storage for a second boot sequence number associated with the second boot image; a comparator for identifying a selected boot image by comparing the first boot sequence number and the second boot sequence number, wherein the comparison determines whether the first boot image or the second boot image is the selected boot image, and wherein the comparator is implemented by device hardware operations; a primary boot image memory storage location, wherein the address of the selected boot image is mapped to the primary boot image memory storage; and memory storage for a default boot vector, wherein the default boot vector is accessed by the device upon being booted or reset and wherein the boot vector specifies the location in memory of the primary boot image.

9.
A multi-boot device in accordance with claim 8, wherein the first boot sequence number is stored at a predetermined location associated with the first boot image and the second boot sequence number is stored at a predetermined location associated with the second boot image.

10. A multi-boot device in accordance with claim 8, wherein the boot image not determined to be the selected boot image is a secondary boot image, the device further comprising: a secondary boot memory storage location for storing the secondary boot image, wherein the contents of the secondary boot memory location can be updated without affecting device operations executing from the primary boot memory location.

11. A multi-boot device in accordance with claim 10, wherein the device is configured to process a command directing a swap from the selected boot image to the boot image stored in the secondary boot memory location and wherein execution of the device is switched from the selected boot image to the boot image stored in the secondary boot memory location without resetting the device.

12. A multi-boot device in accordance with claim 11, the device further comprising: memory storage for a configuration parameter, wherein the parameter indicates whether the device is executing from the primary boot memory location or the secondary boot memory location.

13. A multi-boot device in accordance with claim 8, wherein the comparator is a finite state machine implemented by the device hardware.

14. A multi-boot device in accordance with claim 8, wherein the device determines if multi-booting is enabled by determining whether the first boot image and the second boot image are both valid boot images.

15. A multi-boot system for booting a device from one of a plurality of boot images, the system comprising: memory storage for a first boot sequence number associated with the first boot image; memory storage for a second boot sequence number associated with the second boot image; a comparator for identifying a selected boot image by comparing the first boot sequence number and the second boot sequence number, wherein the comparison determines whether the first boot image or the second boot image is the selected boot image, and wherein the comparator is implemented by system hardware operations; a primary boot image memory storage location, wherein the address of the selected boot image is mapped to the primary boot image memory storage; and memory storage for a default boot vector, wherein the default boot vector is accessed by the system upon being booted or reset and wherein the boot vector specifies the location in memory of the primary boot image.

16. A multi-boot system in accordance with claim 15, wherein the first boot sequence number is stored at a predetermined location associated with the first boot image and the second boot sequence number is stored at a predetermined location associated with the second boot image.

17. A multi-boot system in accordance with claim 15, wherein the boot image not determined to be the selected boot image is a secondary boot image, the system further comprising: a secondary boot memory storage location for storing the secondary boot image, wherein the contents of the secondary boot memory location can be updated without affecting operations executing from the primary boot memory location.

18.
A multi-boot system in accordance with claim 17, wherein the system processes a command directing a swap from the selected boot image to the boot image stored in the secondary boot memory location and wherein execution is switched from the selected boot image to the boot image stored in the secondary boot memory location without resetting the system.

19. A multi-boot system in accordance with claim 18, the system further comprising: memory storage for a configuration parameter, wherein the parameter indicates whether execution is from the primary boot memory location or the secondary boot memory location.

20. A multi-boot system in accordance with claim 15, wherein the comparator is a finite state machine implemented in the system hardware. |
BOOT SEQUENCING FOR MULTI-BOOT DEVICES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 61/784,833 filed on March 14, 2013, which is incorporated herein in its entirety.

TECHNICAL FIELD

The present disclosure relates to microcontrollers and microprocessors and, particularly, to boot sequencing for multi-booting embedded microcontroller systems.

BACKGROUND

Upon being powered or reset, an embedded microcontroller must select a boot device containing a boot image from which the embedded system will run. The microcontroller may support the use of more than one boot device. Multi-booting refers to the microcontroller selecting between multiple available boot devices. In order to support multi-booting, a mechanism is required that allows the microcontroller to select the desired boot device from the set of available boot devices.

Systems typically utilize a default boot vector to identify the boot device containing the boot image that is to be loaded. A default boot vector, which is sometimes referred to as a reset vector, is a designated address space that identifies the location of the boot device from which the system is presently configured to run. Upon a reboot or reset, the CPU accesses the boot vector and is directed to the location of a boot device containing a boot image, which is then loaded. Each boot image will typically include application code for operation of the embedded system and boot loader code for loading the application code. The boot loader initializes the system and loads the application code for execution by the system. In a single boot system, the sole boot image is identified by the boot vector. However, in a multi-boot system, the problem arises as to how to adapt this boot vector mechanism for selecting between multiple available boot devices.

One solution for supporting multi-booting is to change the boot vector that is utilized. This can be accomplished by inserting a jump instruction to be executed in conjunction with a reset instruction. The jump instruction directs the system to the location of a selected boot device. Another possibility is to redefine the location of the boot vector prior to the issuance of a reset command such that, upon reset, the system loads the boot image specified by this redefined boot vector. Another possibility is to circumvent the boot vector by placing the boot image at a fixed address and programming the system to boot directly from this address.

These conventional approaches require making significant changes to the system's software each time a different boot device is selected. For instance, if the location of the boot vector is changed, boot images must be recompiled in order to utilize the new boot vector. As changes to the boot images are made, the user must ensure that updated boot images are configured according to the remapped boot vector. With each change to the location of the boot vector, the user must propagate these changes to all relevant boot images. If the boot vector is circumvented entirely, not only must each boot image be recompiled to point to the address location of the boot code to be used, but the device logic itself must be altered to circumvent the default boot vector and, as before, each boot image would have to be recompiled in order to reset to this new boot code location. It would be desirable to have a configurable multi-booting solution that does not require recompiling boot images each time a change to the selected boot device is made.
It would be further desirable to enable the multi-booting selections to be made via a simple hardware configuration.

SUMMARY

Conventional dual-booting approaches require substantial reconfiguration in order to change the selected boot image and provide no ability to update boot images in a fail-safe manner. Hence, there is a need for a configurable dual-booting solution that allows boot determinations to be made in hardware and provides for seamless updating of boot images. These and other drawbacks in the prior art are overcome in large part by a system and method according to embodiments of the present invention.

According to an embodiment, a method for booting a multi-boot device having a first boot image and a second boot image comprises: determining a first boot sequence number associated with the first boot image; determining a second boot sequence number associated with the second boot image; identifying a selected boot image by comparing the first boot sequence number and the second boot sequence number, wherein the comparison determines whether the first boot image or the second boot image is the selected boot image, and wherein the selected boot image is identified by device hardware operations; mapping the address of the selected boot image to a primary boot memory location specified by a default boot vector; and booting the device from the default boot vector.

In further embodiments, determining the first boot sequence number and determining the second boot sequence number comprise, respectively, reading the first boot sequence number from a predetermined location associated with the first boot image and reading the second boot sequence number from a predetermined location associated with the second boot image. Further embodiments may also include verifying that multi-booting of the device has been enabled. In further embodiments, the boot image not determined to be the selected boot image is a secondary boot image, and such embodiments may also include mapping the address of the secondary boot image to a secondary boot memory location, wherein the contents of the secondary boot memory location can be updated without affecting device operations executing from the primary boot memory location. Further embodiments may also include: processing a command directing the device to swap from the selected boot image to the boot image stored in the secondary boot memory location; and switching execution of the device from the selected boot image to the boot image stored in the secondary boot memory location, wherein the switch is made without resetting the device. Further embodiments may also include setting a configuration parameter indicating whether the device is executing from the primary boot memory location or the secondary boot memory location. In further embodiments, the verification that multi-booting is enabled comprises determining whether the first boot image and the second boot image are both valid boot images.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items.

FIG. 1 is a block diagram of an exemplary processor according to embodiments.

FIG. 2 schematically illustrates operation of embodiments.

FIG. 3 is a flowchart illustrating operation of embodiments.
FIG. 4 is a flowchart illustrating operation of embodiments.

DETAILED DESCRIPTION

The disclosure and various features and advantageous details thereof are explained more fully with reference to the exemplary, and therefore non-limiting, embodiments illustrated in the accompanying drawings and detailed in the following description. Descriptions of known programming techniques, computer software, hardware, operating platforms and protocols may be omitted so as not to unnecessarily obscure the disclosure in detail. It should be understood, however, that the detailed description and the specific examples, while indicating the preferred embodiments, are given by way of illustration only and not by way of limitation. Various substitutions, modifications, additions and/or rearrangements within the spirit and/or scope of the underlying inventive concept will become apparent to those skilled in the art from this disclosure.

As used herein, the terms "comprises," "comprising," "includes," "including," "has," "having," or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, product, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, product, article, or apparatus. Further, unless expressly stated to the contrary, "or" refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present). Additionally, any examples or illustrations given herein are not to be regarded in any way as restrictions on, limits to, or express definitions of, any term or terms with which they are utilized. Instead these examples or illustrations are to be regarded as being described with respect to one particular embodiment and as illustrative only. Those of ordinary skill in the art will appreciate that any term or terms with which these examples or illustrations are utilized encompass other embodiments as well as implementations and adaptations thereof which may or may not be given therewith or elsewhere in the specification, and all such embodiments are intended to be included within the scope of that term or terms. Language designating such non-limiting examples and illustrations includes, but is not limited to: "for example," "for instance," "e.g.," "in one embodiment," and the like.

As discussed above, an embedded system typically runs from a boot image where the location of the boot image in memory is identified by the default boot vector. The default boot vector specifies the boot image that is to be loaded whenever the system is booted or reset. This default boot vector may be located at any predetermined location in program memory as long as this location is uniformly known to the CPU and the boot images as the address of the first boot instruction to be executed upon a reboot or a reset. Upon the system being booted or reset, the address specified by the default boot vector identifies the boot image to be loaded.
However, unlike conventional systems, embodiments provide for the ability to change the boot image to be loaded via a simple configuration process that can be executed in hardware.

As described above, changing the location of the default boot vector is possible but requires recompiling each of the boot images to point to this new boot vector location. Instead, embodiments provide the ability to change the boot image that is to be used without having to recompile the boot images or alter the logic by which the selected boot image is determined by the system. In other words, embodiments utilize a conventional default boot vector that is the first instruction executed upon any reboot or reset of the system. Embodiments provide the ability to switch boot images by re-ordering the available boot devices. Thus, a different boot device can be designated as the selected boot device by reordering the boot devices in the sequence. According to embodiments, a boot device may be any memory device that can be assigned to the boot vector, such as internal or external volatile memory (such as RAM), or internal or external non-volatile memory such as Flash memory, EEPROM, or an SD Card.

According to embodiments, boot devices are assigned sequence numbers that specify the rank of each boot device containing a boot image among the available set of boot devices. Upon a reboot or reset, the system may use a state machine implemented in hardware to determine the relative ordering of the available boot devices based on the sequence numbers of the boot devices. Once the selected boot device has been identified, the address of the selected boot device is mapped to the default boot vector without altering the boot image. This allows the CPU to boot conventionally from the default boot vector, which is redirected to the selected boot image. In this fashion, any number of boot images can be made available and selected using configurable sequence numbers, where the process for selecting the desired boot image can be implemented in hardware and thus executed without the CPU having to load and execute any software.

According to some embodiments, the sequence number provided to each boot image signifies the relative order of the boot images. In one embodiment, the boot image assigned the lowest sequence number is the selected boot image, with the boot image with the next lowest sequence number being the first alternate selected boot image. In another embodiment, the boot image that is provided the highest sequence number is the selected boot image.

In some embodiments, the sequence number for each boot device is stored in a predetermined location of that boot image such that each boot device provides a standardized mechanism for identifying the sequence number of the boot device. In this manner, the user is able to assign a sequence number to each boot device such that the sequence number is stored as part of the boot image. Each available boot device can then be queried in order to ascertain its assigned sequence number. In another embodiment, the boot device sequence numbers are stored in a data structure present in memory. This data structure can be queried in order to determine the available boot devices and the sequence number for each boot device. In another embodiment, sequence numbers are stored external to the boot devices such that each sequence number is stored in a memory location that is associated with a boot device.

After the reset, the sequence numbers are utilized to identify the selected boot device.
In some embodiments, a state machine is utilized to read the sequence numbers for all available boot devices and to determine the boot device with the lowest (or highest) sequence number. In this manner, the state machine identifies the selected boot device. According to embodiments, once the relative ordering of the sequence numbers has been used to identify the selected boot device, the address of the selected boot device is mapped to the boot memory specified by the default boot vector. The selected boot image will thus become the default boot image that is loaded on subsequent reboots or resets and will remain the default boot image until the sequence numbers of the available boot images are altered to indicate that a different boot image has been selected. Also according to embodiments, the address of the boot image with the second-ranked sequence number is mapped to a secondary boot memory location. As described below, embodiments provide the ability for users to trigger swapping execution between the boot memory and the secondary boot memory. This provides the opportunity for the device to switch execution to the secondary boot memory while updating the boot image in the boot memory. This allows fail-safe updating of boot images and also allows updates to be made without resetting the device.

The CPU then proceeds to execute from the selected boot device. According to some embodiments, sequence numbers of boot devices may be changed via software instructions within the application code, where these changes either increment or decrement the sequence number of one or more boot devices. After a boot or reset of the device, the updated sequence numbers are used in determining the relative ordering of the boot devices. In this manner, the user can change the selected boot image without having to recompile any software or alter the booting logic of the device. Consequently, the selected boot image can be determined in hardware by ordering the valid sequence numbers of available boot images.

Each boot memory location is a region in memory that contains a boot image, which is a set of booting instructions and application code from which the CPU can run. However, a region in memory corresponding to a boot memory location may also be blank. This allows the ability to reserve a region in memory, but has the effect of establishing an invalid boot image. The device can then update these invalid boot images while executing from a valid boot image.

In another embodiment explained in further detail below, a user may "hot swap" from one boot image to another without having to reset the device. For example, while executing the application code loaded by a first boot image, the user can trigger an immediate swap to the execution of a second boot image. This provides the ability to swap the previously selected boot image loaded in boot memory with the boot image loaded in the secondary boot memory for as long as the trigger remains in effect. In some embodiments, this temporary boot image swap can be adopted for use in subsequent resets by adjusting the boot sequence numbers of the boot images. Upon the next reset, the boot images will be remapped accordingly.
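Before turning to the exemplary hardware, the selection procedure described above can be illustrated with a minimal C sketch. The sketch models in software what embodiments implement in a hardware state machine; the names (boot_dev_t, select_boot_device, map_to_default_boot_vector), the remap register address, and the conventions that the lowest sequence number wins and that an invalid device reports an error value are all assumptions made for illustration only.

    /* Hypothetical software model of the hardware boot-device selection. */
    #include <stdint.h>
    #include <stddef.h>

    #define SEQ_INVALID 0xFFFFFFFFu   /* assumed value reported for an invalid boot device */

    typedef struct {
        uint32_t base_addr;   /* address of the boot image on this device */
        uint32_t seq_num;     /* configurable boot sequence number        */
    } boot_dev_t;

    /* Select the device with the lowest valid sequence number; devices that
     * report SEQ_INVALID are skipped and never become booting options.     */
    static const boot_dev_t *select_boot_device(const boot_dev_t *devs, size_t n)
    {
        const boot_dev_t *selected = NULL;
        for (size_t i = 0; i < n; i++) {
            if (devs[i].seq_num == SEQ_INVALID)
                continue;                       /* invalid boot image */
            if (selected == NULL || devs[i].seq_num < selected->seq_num)
                selected = &devs[i];
        }
        return selected;                        /* NULL if no valid image exists */
    }

    /* After selection, the address of the selected device is mapped to the
     * boot memory specified by the default boot vector (modeled here as a
     * write to a hypothetical remap register).                             */
    static void map_to_default_boot_vector(const boot_dev_t *dev)
    {
        volatile uint32_t *boot_remap_reg = (volatile uint32_t *)0x40000000u; /* assumed */
        *boot_remap_reg = dev->base_addr;
    }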
Turning now to Figure 1, a block diagram of an exemplary microcontroller 100 that implements a multi-boot system in accordance with embodiments is shown. It is noted that other configurations of the microcontroller are possible; thus, Figure 1 is an exemplary embodiment.

The microcontroller 100 includes a bus 101 and a central processing unit (CPU) core 102 coupled to the bus 101. The CPU core 102 may include one or more register arrays 150, an arithmetic logic unit 154, an instruction decode module 156, and a program counter 158. Upon being booted or reset, the CPU core 102 is configured to read its first instruction from the default boot vector 115 stored in program memory.

In the embodiment illustrated in Figure 1, data memory 108 communicates with the CPU core 102 via the bus 101. The bus 101 also provides the CPU core 102 with access to microcontroller services such as interrupt controllers 110 and clock module 111. The CPU core 102 can also access one or more peripheral components 114 via the bus 101. These peripheral components may be implemented entirely by the microcontroller or may be implemented in some part external to the microcontroller. The peripherals 114 available to the microcontroller may implement, for example, timing support, I/O (input/output) interfaces, PWM (pulse width modulation), and USB functionality.

The CPU core 102 also communicates with program memory 104. In some embodiments, one or more boot images 116, 118 are stored in program memory 104. Other embodiments may allow for boot images to be stored in a separate, dedicated memory. The boot images comprise a first boot image 116 and one or more second boot images 118. As described below, embodiments allow the microcontroller 100 to selectively boot based on either the first boot image 116 or one of the second boot images 118. As described above, some embodiments may refer to boot memory locations, regions, or devices rather than boot images. Thus, in some embodiments, first boot image 116 and second boot image 118 may be represented as regions in memory that are boot memory locations, with no guarantee that the region contains a valid boot image.

According to the embodiment of Figure 1, a default boot vector 115 is located in program memory 104. The CPU core 102 is configured to read its first instruction from the default boot vector 115. Some embodiments may allow for the boot vector 115 to be located in other non-volatile or volatile memory. The default boot vector 115 may be located anywhere in the program memory as long as the CPU core 102 has been configured to access the default boot vector 115 upon a boot or reset. The default boot vector 115 specifies the location in memory of the boot image to be loaded, which is referred to as the boot memory 130. Thus, upon a boot or reset, the CPU core 102 accesses the default boot vector 115 and is directed to the address of the boot memory 130, which has been mapped to the address of the selected boot device that stores the selected boot image.

The device may also retain other boot images in program memory via a secondary boot memory 135. Based on the ordering of the boot images by sequence numbers, the selected boot image is identified and its address is mapped to the boot memory 130. The second-ranked boot image is the first alternate boot image, and its address is mapped to secondary boot memory 135. The user can then trigger switching execution between the boot memory 130 and the secondary boot memory 135 as needed in order to update these boot images in a fail-safe manner.

In some embodiments, the boot images 116, 118 include predetermined locations for storing boot sequence numbers.
As illustrated in Figure 1, the first boot image comprises a boot sequence number 128, which is used to determine the relative ordering of the first boot image versus other available boot images that have been assigned a boot sequence number. In this same manner, the second boot image also comprises a boot sequence number 126 that specifies the relative order of the second boot image within the set of available boot images.

In order to facilitate determination of the relative ordering of the available boot images, the boot sequence number of each boot image is determined in a standardized manner. In some embodiments, the boot sequence number will be stored in a predetermined location within the boot image. For example, the boot sequence number of each boot image may be located at a memory address that is located at a fixed offset from the first instruction of the boot image. In some embodiments, the memory address at which the boot sequence number is stored within a boot image may be available as a variable that can be queried at a predetermined location within the boot image. In some embodiments, the boot sequence numbers that have been assigned are stored in a data structure. In this scenario, the relative ordering of the available boot images can be ascertained by querying this data structure to obtain the sequence number for each boot image. In some embodiments, the boot sequence numbers will be stored in non-volatile memory external to the boot image.

Every available boot image can be assigned a boot sequence number. However, embodiments may assign only a subset of the available boot images a sequence number. In such scenarios, boot images with no sequence numbers would be ordered behind all boot images with an assigned sequence number. In some embodiments, some of the available boot devices may be invalid and thus cannot be ordered in this fashion. For these invalid boot devices, querying their sequence number returns an error. These invalid boot images would not be mapped to any boot memory and thus do not exist as booting options within the multi-boot device.

Figure 1 also illustrates a boot panel comprised of special-purpose configuration registers that are used to direct the booting process in some embodiments. One such register is a dual boot control register 122 that can be used to configure whether the microcontroller 100 is to execute dual booting processes. As explained in more detail below, if the value of the dual boot control register 122 does not enable dual booting, the device will proceed to boot the microcontroller as a single-boot device. The configuration registers may also comprise a boot swap register 124 that is used to identify the boot image that is presently loaded and to command the device to conduct a hot swap to a specified boot image, for example the boot image in the secondary boot memory. Additional configuration registers may be present that further direct the boot sequencing process.
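The standardized sequence-number query and the invalid-device handling described above can be sketched in C as follows. The fixed offset, the error value, and the observation about erased Flash are assumptions chosen only for illustration; actual embodiments fix these details per device.

    #include <stdint.h>
    #include <stdbool.h>

    #define SEQ_NUM_OFFSET 0x08u        /* assumed fixed offset from the first instruction */
    #define SEQ_INVALID    0xFFFFFFFFu  /* assumed error value for an invalid boot device  */

    /* Query the sequence number stored within the boot image itself. */
    static uint32_t read_seq_num(uint32_t image_base)
    {
        const volatile uint32_t *p =
            (const volatile uint32_t *)(uintptr_t)(image_base + SEQ_NUM_OFFSET);
        return *p;   /* a blank (erased) region typically reads as SEQ_INVALID on Flash */
    }

    /* An invalid boot device is never mapped to boot memory. */
    static bool is_valid_boot_device(uint32_t image_base)
    {
        return read_seq_num(image_base) != SEQ_INVALID;
    }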
Figure 2 schematically illustrates a process for utilizing boot sequence numbers in accordance with embodiments. Upon being booted or reset, the device reads the sequence numbers for the two available boot images 116, 118. At step 201, the boot sequence number 128 for the first boot image 116 is determined. At step 202, the boot sequence number 126 for the second boot image 118 is determined. At step 205, the device then compares the first boot image sequence number 128 and the second boot image sequence number 126.

Based on this comparison, the selected boot image 210 and the first alternate boot image 215 are identified. At step 220, the address of the boot device storing the selected boot image 210 is mapped to the boot memory location 130. At step 230, the address of the first alternate boot image 215 is mapped to the secondary boot memory location 135. In some embodiments, the addresses of the remaining valid boot images are mapped, in order, to the secondary boot memory. At step 235, the CPU core then boots conventionally, by accessing the default boot vector and being redirected to the address of the boot device storing the selected boot image. The CPU core proceeds to boot from the selected boot image.

The selected boot image can be identified, according to some embodiments, utilizing a finite state machine 120. Using a finite state machine for identifying the selected boot image facilitates embodiments that implement configurable dual booting in hardware. The finite state machine 120 is used to make pair-wise comparisons of boot sequence numbers in order to determine their relative ordering. In Figure 2, a finite state machine 120 is used to determine the relative ordering of the boot sequence numbers of a first boot image 116 and a second boot image 118. At steps 201 and 202, the boot sequence numbers for each of the boot images are determined. As described above, according to some embodiments, the boot sequence number of each boot image may be stored at a predetermined address location of the boot image. Other embodiments may allow for the boot sequence numbers to be stored in other fixed locations or data structures in memory. At step 205, the finite state machine then compares the boot sequence number 128 of the first boot image and the boot sequence number 126 of the second boot image.

In the example of Figure 2, the relative ordering of only two boot images is illustrated. It is noted, however, that the finite state machine 120 can be used to determine the relative ordering of any number of boot images. The finite state machine 120 compares the boot sequence numbers of two boot images at one time. However, according to algorithms known in the art, a series of pair-wise comparisons can be made in order to determine the relative ordering of the boot sequence numbers for any number of boot images.

In the embodiment of Figure 3, the device makes a preliminary determination whether to proceed with dual booting based on whether dual booting has been requested or whether multiple boot images are available for booting. At step 302, the device undergoes a boot or a reset. At step 304, the device reads the configuration data necessary to determine whether to proceed with dual booting. In one embodiment, the boot configuration panel is accessed in order to read configuration bits that encode dual booting instructions. In some embodiments, these configuration bits will be located in dual boot control register 122. Other embodiments may instead store these configuration bits in fixed locations in program memory. In some embodiments, dual booting is determined based on whether multiple valid boot images can be identified. Thus, in step 304, the CPU core accesses the boot sequence numbers for every available boot image. At step 306, the device determines whether dual booting has been enabled. If configuration bits are used, embodiments will determine whether these bits enable dual booting.
If dual booting is based on valid boot sequence numbers, embodiments will evaluate the boot sequence numbers that have been identified to determine whether two or more valid boot images are available. If only one valid boot image is identified or the configuration bits specify that dual booting is not enabled, the device boots in a single-boot configuration at step 315 by booting from the default boot vector. If no valid boot image is identified, the CPU core boots from the boot image specified by the default boot vector, which will be the boot image that was last known to be valid. If step 306 determines that dual boot is enabled, then the selected boot image is determined as described with respect to the embodiment of Figure 2. At step 310, the boot sequence numbers of the first boot image 116 and the second boot image 118 are determined. Based on the boot sequence number comparison made at step 312, either the first boot image 116 or the second boot image 118 is determined to be the selected boot image. If the boot sequence number of the first boot image 116 is the lowest boot sequence number or equal to the lowest sequence number, at step 320, the address of the first boot image is mapped to the boot memory and, at step 325, the address of the second boot image is mapped to secondary boot memory. Conversely, if the boot sequence number of the second boot image 118 is the lowest boot sequence number, at step 330, the address of the second boot image is mapped to the boot memory and, at step 335, the address of the first boot image is mapped to secondary boot memory.

The process by which a boot image hot swap is conducted is shown in Figure 4. At process step 402, the device is executing according to a first boot image 116. At this point, the boot swap register 124 signals that a hot swap is not presently in effect. At step 410, the boot swap register 124 is used to determine whether a boot swap is to be undertaken. In some embodiments, the boot swap register 124 may encode a value directing that a boot swap be undertaken. For example, a zero entry in the boot swap register may indicate that the current boot image should be maintained, and a one entry may indicate that the current boot image should be swapped. Another possibility is for the boot swap register 124 to encode the device number of the boot device that is presently loaded. As long as the boot swap register 124 contains the device number corresponding to the currently executing boot image, no swap is made. If the boot swap register 124 is changed to point to a different device number, this signals that a boot swap should be undertaken. Yet another possibility is for boot swaps to be specified in software via a bootswap instruction. This embodiment is the focus of the remaining elements of Figure 4. If the boot swap register 124 indicates that no boot swap is presently requested, the device continues to monitor the boot swap register until any such indication is identified.
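The device-number encoding of the boot swap register described above might be modeled in C as shown below. The register address, the field layout, and the software polling loop are illustrative assumptions only; embodiments may implement this monitoring entirely in hardware.

    #include <stdint.h>

    /* Hypothetical memory-mapped boot swap register 124 (address assumed). */
    #define BOOT_SWAP_REG (*(volatile uint32_t *)0x40000010u)

    /* Monitor the boot swap register; a swap is requested when the register
     * no longer names the currently executing boot device (step 410).      */
    static uint32_t monitor_boot_swap(uint32_t current_device)
    {
        for (;;) {
            uint32_t requested = BOOT_SWAP_REG;
            if (requested != current_device)
                return requested;   /* a different device number signals a hot swap */
        }
    }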
In process step 404, it is determined that the user has directed the device to enact a hot swap of boot images. In the embodiment of Figure 4, the user triggers a hot swap of boot images by issuing a BOOTSWP instruction that is followed by a GOTO <target> instruction, where the <target> specifies the boot image to be hot swapped. For instance, the <target> would specify the second boot image 118 as the boot image to be swapped in place of the active boot image. At step 404, the CPU core 102 executes the BOOTSWP instruction and interrupts the execution of the currently active first boot image 116.

At process step 406, the CPU core 102 executes the GOTO instruction and jumps directly to the <target> boot image, for example the second boot image 118. Upon executing this jump, the CPU core 102 begins executing the second boot image 118. This results in a hot swap from the first boot image 116 to the second boot image 118 with no reset of the device and no changes to the existing device configuration. At process step 408, the configuration data is updated to reflect that the active boot image has been hot swapped. For instance, the boot swap register 124 would be updated to indicate that no hot swap is presently requested or to specify the address of the boot image that is presently executing.

A hot swap of boot images, by itself, does not alter the boot image that is mapped to boot memory and thus does not change the boot image that will be loaded upon a reset of the device. Thus, even though the second boot image 118 may have been hot swapped in place of the first boot image 116, if the first boot image 116 is still mapped to boot memory, the device will load the first boot image 116 upon a subsequent boot or reset. Unless modifications are made to the boot sequence numbers of the boot images, a hot swap will constitute a temporary swap that lasts only until the device is reset. A hot swap of boot images can be made into a permanent swap via the previously described process of updating boot sequence numbers. For example, if the user has hot swapped to the second boot image 118, the swap can be made permanent by re-assigning boot sequence numbers of the available boot images such that the second boot image 118 has the lowest boot sequence number, which results in the second boot image 118 being mapped to the default boot vector 115.

One advantage provided by embodiments is a fail-safe mechanism for updating boot images. If the presently executing boot image receives an update, embodiments provide a fail-safe method for making this update. For instance, the updated boot image can be stored to the secondary boot memory. Once the updated boot image has been verified, the sequence numbers can be updated for the executing boot image and the updated boot image, as described above, such that the updated boot image now has the lowest/highest sequence number and will be determined to be the selected boot image on the next boot or reset. If a power failure were to occur at any time during the update such that the updated boot image is corrupted, the device can continue operating from the currently executing boot image. If the power failure occurs after the updated boot image is verified but during the sequence number update, either the updated sequence number is valid, such that the new boot image is correctly identified as the selected boot image, or the sequence number is invalid, such that the currently executing boot code is loaded based on it still being located in boot memory and identified by the default boot vector.
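The fail-safe ordering described in the preceding paragraph can be summarized in a short, hedged C sketch. The helper functions are hypothetical names for operations the text describes; the key property illustrated is that the sequence-number update is committed only after the new image is fully written and verified, so a power failure at any point leaves a bootable image.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical placeholders for the device-specific operations named above. */
    static bool write_image_to_secondary(const uint8_t *img, size_t len)
    {
        (void)img; (void)len;
        return true;   /* would program the secondary boot memory */
    }

    static bool verify_image(const uint8_t *img, size_t len)
    {
        (void)img; (void)len;
        return true;   /* would check, e.g., a checksum or signature */
    }

    static bool write_seq_num(uint32_t device, uint32_t seq)
    {
        (void)device; (void)seq;
        return true;   /* would commit the new sequence number */
    }

    static bool failsafe_update(const uint8_t *new_image, size_t len,
                                uint32_t secondary_dev, uint32_t winning_seq)
    {
        if (!write_image_to_secondary(new_image, len))
            return false;   /* power loss here: old image still selected and intact */
        if (!verify_image(new_image, len))
            return false;   /* a corrupted image never becomes the selected image   */
        /* Power loss during this write leaves either a valid new sequence number
         * or an invalid one; in both cases the device boots a valid image.       */
        return write_seq_num(secondary_dev, winning_seq);
    }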
Although the foregoing specification describes specific embodiments, numerous changes in the details of the embodiments disclosed herein, and additional embodiments, will be apparent to, and may be made by, persons of ordinary skill in the art having reference to this description. In this context, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of this disclosure. Accordingly, the scope of the present disclosure should be determined by the following claims and their legal equivalents. |
An approach for management of memory in a programmable integrated circuit (IC) (100) includes configuring (602) a memory map (400) of the programmable IC with an association of a first subset of addresses of memory address space of the programmable IC and physical memory of the programmable IC. The memory map is further configured (602) with an association of a second subset of addresses of the memory address space and a virtual memory block (112). At least a portion of a cache memory of the programmable IC is locked (608, 612, 616) to the second subset of addresses. |
CLAIMS

What is claimed is:

1. A method of managing memory in a programmable integrated circuit (IC), comprising: configuring a memory map of the programmable IC with an association of a first subset of addresses of memory address space of the programmable IC to physical memory of the programmable IC; configuring the memory map with an association of a second subset of addresses of the memory address space to a virtual memory block; and locking at least a portion of a cache memory of the programmable IC to the second subset of addresses.

2. The method of claim 1, further comprising implementing the virtual memory block as a circuit on the programmable IC, wherein the circuit that implements the virtual memory block returns a constant value in response to any input address in the second subset of addresses.

3. The method of claim 1 or claim 2, further comprising: accessing the locked portion of the cache memory in response to a memory access request that references an address in the second subset of addresses; and bypassing updating of the physical memory for updates to the locked portion of the cache memory.

4. The method of any of claims 1-3, wherein the locking includes locking one or more ways of a plurality of ways of the cache memory.

5. The method of any of claims 1-4, wherein the cache memory is a second-level cache.

6. The method of any of claims 1-5, wherein the programmable IC has a processor subsystem and a programmable logic subsystem, a second-level cache in the processor subsystem, a portion of the address space assigned to physical memory in the programmable logic subsystem, and the second subset of addresses associated with a subset of the portion of the address space assigned to physical memory in the programmable logic subsystem.

7. The method of claim 6, further comprising implementing the virtual memory block as a circuit in the programmable logic subsystem, wherein the circuit that implements the virtual memory block returns a constant value in response to any input address in the second subset of addresses.

8. The method of any of claims 1-7, further comprising storing a first value in storage elements associated with the addresses of the first subset, and storing a second value in storage elements associated with the addresses of the second subset, wherein the first value indicates that the addresses of the first subset are non-cacheable, and the second value indicates that the addresses of the second subset are cacheable.

9. The method of any of claims 1-8, wherein the locking includes: storing one or more addresses of the second subset in storage elements associated with one or more ways of the cache memory; and storing a first value in one or more storage elements associated with the one or more ways of the cache memory, wherein the first value in the one or more storage elements indicates that the associated one or more ways are locked to the one or more addresses of the second subset.
10. The method of claim 1, wherein the cache memory is a multi-way set associative cache, the method further comprising: implementing the virtual memory block as a circuit on the programmable IC, wherein the circuit that implements the virtual memory block returns a constant value in response to any input address in the second subset of addresses; selecting one way of the cache memory; issuing read requests to the virtual memory block at addresses of the second subset of addresses and corresponding to the one way; storing the constant value returned from the circuit that implements the virtual memory block in memory of the selected one way of the cache memory; and repeating the selecting, issuing, and storing for one or more other ways of the cache memory.

11. A programmable integrated circuit (IC), comprising: a processor subsystem including memory circuitry that implements a first portion of memory address space of the programmable IC; a programmable logic subsystem including programmable logic circuitry and memory circuitry that implements a second portion of the memory address space; a cache circuit coupled to the memory circuitry of the processor subsystem and to the memory circuitry of the programmable logic subsystem; and a virtual memory block circuit implemented in the programmable logic circuitry, wherein the virtual memory block circuit is responsive to addresses of a subset of the second portion of the memory address space; wherein: the cache circuit includes lock storage elements and tag storage elements associated with storage blocks of the cache circuit, a plurality of the tag storage elements are configured with addresses of the subset of the second portion of the memory address space, and one or more of the lock storage elements are configured with a first value that indicates that one or more associated storage blocks are locked to the addresses of the subset of the second portion of the memory address space in the plurality of tag storage elements.

12. The programmable IC of claim 11, wherein: the processor subsystem includes flag storage elements associated with the addresses of memory address space; a first subset of the flag storage elements are configured with a first value that indicates that caching of data from the first portion of the memory address space is disabled; and a second subset of the flag storage elements are configured with a second value that indicates that caching of data from the second portion of the memory address space is enabled.

13. The programmable IC of claim 11 or claim 12, wherein the virtual memory block circuit is configured to return a constant value in response to any input address in the subset of the second portion of the memory address space.

14. The programmable IC of any of claims 11-13, wherein the cache circuit is configured and arranged to: access a storage block locked to an address of the subset of the second portion of the memory address space in response to a memory access request that references the address in the subset of the second portion of the memory address space; and bypass updating of the physical memory for an update to the storage block locked to the address of the subset of the second portion of the memory address space.

15. The programmable IC of any of claims 11-14, wherein the processor subsystem includes a first-level cache coupled to the cache circuit, and the cache circuit is a second-level cache. |
MANAGEMENT OF MEMORY RESOURCES IN A PROGRAMMABLE INTEGRATED CIRCUIT

FIELD OF THE INVENTION

The disclosure generally relates to managing memory resources in a programmable integrated circuit (IC).

BACKGROUND

Programmable integrated circuits (ICs) with different capabilities are widely available. Generally, programmable ICs are devices that can be programmed to perform specified logic functions. A programmable IC may include programmable logic or a combination of programmable logic and hardwired logic, such as one or more microprocessors. One type of programmable IC, the field programmable gate array (FPGA), typically includes an array of programmable tiles. These programmable tiles comprise various types of logic blocks, which can include, for example, input/output blocks (IOBs), configurable logic blocks (CLBs), dedicated random access memory blocks (BRAM), multipliers, digital signal processing blocks (DSPs), processors, clock managers, delay lock loops (DLLs), bus or network interfaces such as Peripheral Component Interconnect Express (PCIe) and Ethernet, and so forth.

Each programmable tile may include both programmable interconnect and programmable logic. The programmable interconnect typically includes a large number of interconnect lines of varying lengths interconnected by programmable interconnect points (PIPs). The programmable logic implements the logic of a user design using programmable elements that can include, for example, function generators, registers, arithmetic logic, and so forth. The programmable interconnect and programmable logic are typically programmed by loading a configuration data stream into internal configuration memory cells that define how the programmable elements are configured. The configuration data can be read from memory (e.g., from an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

Some programmable ICs include one or more microprocessors that are capable of executing program code. The microprocessor can be fabricated as part of the same die that includes the programmable logic circuitry and the programmable interconnect circuitry, also referred to collectively as the "programmable circuitry" of the IC. It should be appreciated that execution of program code within a microprocessor is distinguishable from "programming" or "configuring" the programmable circuitry that may be available on an IC. The act of programming or configuring programmable circuitry of an IC results in the implementation of different physical circuitry as specified by the configuration data within the programmable circuitry.

A system on chip (SOC) is an example of a programmable IC. An SOC may include a microprocessor, programmable logic, on-chip memory, various input/output (I/O) circuitry, and interconnect circuits for communicating between the microprocessor, programmable logic, and I/O circuitry. Although the integration of multiple functions on a single SOC may support a wide variety of applications and provide great flexibility, the quantity of resources providing particular functional circuitry on the SOC may be less than the quantity of resources available if that particular functional circuitry were implemented on a separate IC die. For example, an SOC may have fewer programmable logic resources than a dedicated FPGA IC die.
Similarly, an SOC having one or more microprocessors, on-chip memory, and programmable logic may have fewer on-chip memory resources than another SOC having microprocessors, on-chip memory, and no programmable logic. Some applications may benefit from a greater quantity of on-chip memory than a particular SOC has available. To accommodate a need for more on-chip memory, a designer may look for an SOC having greater on-chip memory resources. However, an SOC having more on-chip memory may be more expensive than another SOC having less on-chip memory, leaving the designer to choose between less performance at a reduced cost or greater performance at a greater cost.

SUMMARY

A method of managing memory in a programmable integrated circuit (IC) is disclosed. The method includes configuring a memory map of the programmable IC with an association of a first subset of addresses of memory address space of the programmable IC to physical memory of the programmable IC. The memory map is configured with an association of a second subset of addresses of the memory address space to a virtual memory block. At least a portion of a cache memory of the programmable IC is locked to the second subset of addresses.

A programmable IC is also disclosed. The programmable IC includes a processor subsystem, and the processor subsystem includes memory circuitry that implements a first portion of memory address space of the programmable IC. The programmable IC further includes a programmable logic subsystem, and the programmable logic subsystem includes programmable logic circuitry and memory circuitry that implement a second portion of the memory address space. A cache circuit is coupled to the memory circuitry of the processor subsystem and to the memory circuitry of the programmable logic subsystem. A virtual memory block circuit is implemented in the programmable logic circuitry. The virtual memory block circuit is responsive to addresses of a subset of the second portion of the memory address space. The cache circuit includes lock storage elements and tag storage elements associated with storage blocks of the cache circuit. A plurality of the tag storage elements are configured with addresses of the subset of the second portion of the memory address space. One or more of the lock storage elements are configured with a first value that indicates that one or more of the associated storage blocks are locked to the addresses of the subset of the second portion of the memory address space in the plurality of tag storage elements.

Other features will be recognized from consideration of the Detailed Description and Claims, which follow.

BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects and features of the disclosed methods and circuits will become apparent upon review of the following detailed description and upon reference to the drawings, in which:

FIG. 1 shows a programmable IC, which is an example of a system on chip (SOC);
FIG. 2 shows an example of an implementation of a virtual memory block in a programmable logic subsystem;
FIG. 3 shows a system memory map that may be applicable to the system of FIG. 1;
FIG. 4 illustrates a translation table that maps physical addresses to attributes of those physical addresses;
FIG. 5 shows multiple blocks of memory and a cache memory having locked addresses for a virtual memory block; and
FIG. 6 shows a process of configuring a programmable IC to implement a virtual memory block and locking addresses of the virtual memory block in a cache.

DETAILED DESCRIPTION OF THE DRAWINGS

In the following description, numerous specific details are set forth to describe specific examples presented herein. It should be apparent, however, to one skilled in the art, that one or more other examples and/or variations of these examples may be practiced without all the specific details given below. In other instances, well-known features have not been described in detail so as not to obscure the description of the examples herein. For ease of illustration, the same reference numerals may be used in different diagrams to refer to the same elements or additional instances of the same element.

In the disclosed methods and systems, the quantity of on-chip memory of a programmable IC, such as an SOC, is increased by implementing a virtual memory block and dedicating cache storage of the programmable IC to the part of the address space of the programmable IC assigned to the virtual memory block. The programmable IC includes a memory map that describes the address space of the programmable IC. The memory map is configured such that part of the address space is associated with physical memory resources that provide data storage on the programmable IC. A virtual memory block is associated with another part of the address space. The virtual memory block is assigned a subset of addresses of the address space, and there is no main memory circuitry for storage of data at those addresses. Instead of the virtual memory block having main memory circuitry, a cache is locked with the addresses associated with the virtual memory block. Caching of data at addresses outside the virtual memory block addresses may be disabled if the entire cache is used for the virtual block. When the addresses of the virtual memory block are locked in the cache, references to the addresses of the virtual memory block are resolved to the cache. Any updates to data at the addresses of the virtual memory block remain in the cache and are not further written to any main memory storage, because the storage for the addresses of the virtual memory block is provided exclusively by the cache.

FIG. 1 shows a programmable IC 100, which is an example of an SOC. The programmable IC may be configured to effectively increase the quantity of on-chip memory on the chip. The programmable IC includes a processor subsystem 102 and a programmable logic subsystem 104. The processor subsystem generally includes one or more processor cores 106, memory resources, and circuitry for connecting to the programmable logic subsystem. The programmable logic subsystem may include programmable logic (not shown), programmable interconnect (not shown), and various other circuitry such as that described above for an FPGA.

The programmable IC includes a number of memory resources that are accessible to program code executing on the processor core 106 or to a circuit implemented in the programmable logic subsystem. The memory resources include on-chip memory 108 and memory blocks 122 that may be configured in the programmable logic subsystem. Double data rate (DDR) memory resources 110 may be disposed off-chip and provide additional storage for program code executing on the processor core or for circuits implemented in the programmable logic subsystem.
The on-chip memory 108 and DDR memory 110 may be implemented with DRAM in an example implementation.

The physical address space of the programmable IC is mapped in memory map 118. The physical address space encompasses the on-chip memory 108 and DDR memory 110, and in addition, address space available in the programmable logic subsystem, such as for I/O peripherals 120, memory blocks 122, and the virtual memory block 112. The memory map may be a lookup table memory in which ranges of addresses are mapped to the components assigned to the address ranges. For example, one address range is mapped to on-chip memory 108, another address range is mapped to DDR memory 110, another address range is mapped to I/O peripherals 120, and another address range is mapped to the programmable logic in which memory blocks 122 and virtual memory block 112 may be implemented. The interconnect circuit 124 uses the memory map to direct a memory access request to the correct component.

For some applications, there may be security requirements that exclude some uses of DDR memory, a need for additional memory in the processor subsystem, or a need for improved determinism in memory access times. The programmable IC may be configured to implement a virtual memory block 112 in the programmable logic subsystem and to lock addresses of the virtual memory block to the second-level cache 114 in the processor subsystem 102 in order to provide additional memory resources to the processor subsystem. Caching may be disabled for addresses outside the virtual memory block if the entire cache is dedicated to the virtual memory block. Program code executing on the processor core 106 can access on-chip memory 108 and second-level cache 114 faster than accessing memory implemented in the programmable logic subsystem or DDR memory 110. Although the programmable logic subsystem may be configured to implement memory resources for code executing in the processor subsystem, an access request from the processor core to memory implemented in the programmable logic subsystem may pass through a switching network, as exemplified by line 116, and incur substantial delay.

In one implementation, the virtual memory block 112 is implemented in programmable logic of the programmable logic subsystem 104. The virtual memory block 112 differs from memory blocks 122 in that there is no data storage provided by the virtual memory block in the programmable logic subsystem. That is, program code executing on processor core 106 may write data in memory circuitry in the programmable logic subsystem that implements memory blocks 122, but data written to the addresses of the virtual memory block 112 is not stored in any memory circuitry of the programmable logic subsystem. Rather, data written to the addresses of the virtual memory block is stored in the second-level cache 114.

Instead of having memory circuits in the programmable logic subsystem for storage of data for the addresses of the virtual memory block, the addresses of the virtual memory block are locked in the second-level cache 114. Accesses by code executing on the processor core(s) 106 to addresses mapped to the virtual memory block will be directed to the second-level cache. Thus, code executing on the processor core has both on-chip memory 108 and the second-level cache available as memory resources accessible within the processor subsystem.
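The range-to-component lookup performed by the memory map 118 and the interconnect circuit 124 can be illustrated with a brief C sketch. The component identifiers, range boundaries, and table contents below are assumptions chosen only for illustration; an actual memory map is realized as a lookup table memory in hardware.

    #include <stdint.h>
    #include <stddef.h>

    typedef enum { OCM, DDR, IO_PERIPH, PROG_LOGIC, UNMAPPED } component_t;

    typedef struct {
        uint32_t    base;
        uint32_t    limit;      /* exclusive upper bound */
        component_t component;
    } map_entry_t;

    /* Hypothetical address ranges; real values depend on the device. */
    static const map_entry_t memory_map[] = {
        { 0x00000000u, 0x00040000u, OCM        },  /* on-chip memory 108   */
        { 0x00100000u, 0x40000000u, DDR        },  /* DDR memory 110       */
        { 0x80000000u, 0xA0000000u, PROG_LOGIC },  /* memory blocks 122 and
                                                      virtual memory block 112 */
        { 0xE0000000u, 0xE0100000u, IO_PERIPH  },  /* I/O peripherals 120  */
    };

    /* Model of the interconnect's address decode. */
    static component_t decode_address(uint32_t addr)
    {
        for (size_t i = 0; i < sizeof memory_map / sizeof memory_map[0]; i++)
            if (addr >= memory_map[i].base && addr < memory_map[i].limit)
                return memory_map[i].component;
        return UNMAPPED;
    }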
All updates to data at the addresses of the virtual memory block remain in the second-level cache, and updating of main memory is bypassed, because the storage for the addresses of the virtual memory block is provided exclusively by the cache. Likewise, accesses to addresses of the virtual memory block by code executing on a "soft processor" (not shown), which is a processor implemented in programmable logic of the programmable logic subsystem, will be directed to the second-level cache.

The processor subsystem further includes a memory management unit (MMU) and first-level cache circuit 126, a translation table 128, and a snoop control unit 130. The MMU receives memory access requests from the processor core(s) 106 and uses the translation table to translate virtual addresses into physical addresses. The translation table maps a virtual address space to physical addresses of the physical memory resources of the SOC. In an example implementation, the translation table includes flag storage elements associated with the physical addresses. The state of each flag storage element indicates whether the associated address range is cacheable or non-cacheable. For addresses of the virtual memory block, the associated flag storage elements in the translation table may be set to a value that indicates that caching of data at those addresses is enabled. For addresses outside the virtual memory block, the associated flag storage elements may be set to a value that indicates that caching of data at those addresses is disabled if the entire cache is dedicated to the virtual memory block.

The MMU 126 determines whether or not the address in a memory request from the processor core(s) is present in the first-level cache. For addresses present in the first-level cache, the MMU accesses the first-level cache. For addresses not present in the first-level cache, the MMU passes the requests to the snoop control unit 130.

The snoop control unit 130 processes memory access requests forwarded from the MMU and memory access requests transmitted by processor circuits (not shown) implemented in the programmable logic subsystem 104 over interface 132. In addition to maintaining coherency between second-level cache 114 and other caches (not shown) in the programmable logic subsystem, the snoop control unit determines whether or not a requested address is cached in the second-level cache. As addresses of the virtual memory block 112 are locked in the second-level cache, the second-level cache is accessed for requests referencing the virtual memory block. For requests referencing addresses not present in the second-level cache, the snoop control unit forwards the request to the interconnect circuit 124, which in turn determines the component address from the memory map 118 and forwards the request accordingly.

The use of the virtual memory block and locking of addresses of the virtual memory block may be adapted to a variety of different programmable IC architectures. For example, in one implementation, the second-level cache is an 8-way, set associative cache. It will be recognized that the cache in alternative implementations may have fewer or more ways, may be direct mapped, or may be fully associative. The processor subsystem 102 has first-level and second-level caches. An alternative implementation may have a single-level cache in which the addresses of the virtual memory block are locked in the single-level cache.
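The cacheable/non-cacheable flagging described above for the translation table 128 can be sketched as follows. This is a simplified software model, assuming a flat table of range descriptors and a single cacheable bit per entry; real translation tables carry additional attributes, as noted with respect to FIG. 4 below.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        uint32_t phys_base;
        uint32_t phys_limit;   /* exclusive upper bound */
        bool     cacheable;    /* models a flag storage element */
    } tt_entry_t;

    /* Mark the virtual memory block range cacheable and everything else
     * non-cacheable, for the case where the entire cache is dedicated
     * to the virtual memory block.                                     */
    static void configure_flags(tt_entry_t *tt, size_t n,
                                uint32_t vmb_base, uint32_t vmb_limit)
    {
        for (size_t i = 0; i < n; i++) {
            bool in_vmb = tt[i].phys_base >= vmb_base &&
                          tt[i].phys_limit <= vmb_limit;
            tt[i].cacheable = in_vmb;
        }
    }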
FIG. 2 shows an example of an implementation of a virtual memory block 112 in a programmable logic subsystem 104. The virtual memory block may be implemented as a circuit that outputs bit values of 0 for every address input to the circuit. The virtual memory block need not have any memory circuits for storing data associated with the addresses of the address space assigned to the virtual memory block, as the same constant value is output for each assigned address, as illustrated by line 204. The programmable logic subsystem may further include a memory block controller 202 for providing an interface between the virtual memory block and a microcontroller bus that connects the programmable logic subsystem to the processor subsystem.

FIG. 3 shows a system memory map 300 that may be applicable to the system of FIG. 1. The memory map may be a lookup table memory that is addressable by addresses, or portions of the addresses, in the physical address space of the system. The address space may be divided into address ranges, with each address range mapped to a component or set of components addressed by the addresses in the associated address range. The example system memory map includes address ranges mapped to on-chip memory, CPU private registers, processor system registers, I/O peripherals, programmable logic, and DDR memory. A portion of the address space that is mapped to programmable logic is allocated to a virtual memory block, as illustrated by block 302, which is a portion of the address range 304. The dashed line signifies that there is no physical memory circuitry in the programmable logic for storage of data at addresses of the virtual memory block. Other portions of the address range 304 may be assigned to memory blocks that have physical memory circuitry for storage of data at addresses assigned to the memory blocks.

FIG. 4 illustrates a translation table 400 that maps physical addresses of memory address space to attributes of those physical addresses. The translation table is a memory map and may be used by an operating system executing on the processor core(s) 106 of FIG. 1, for example. The translation table may be a lookup table memory that is addressable by physical addresses, or portions of the addresses. In an example implementation, the entire cache may be dedicated to the virtual memory block, and the addresses of the virtual memory block are the only addresses that are cacheable. The translation table provides storage not only for physical addresses, but also includes flag storage elements that indicate whether the associated addresses are cacheable or non-cacheable. For example, storage elements 402 are set to a value that indicates that the addresses of the virtual memory block are cacheable, and all other storage elements, such as storage elements 404, are set to a value that indicates that other addresses are non-cacheable. In addition to the storage elements that indicate whether or not the associated addresses are cacheable, the translation table may include other attributes that indicate, for example, bufferability, secure/non-secure memory, sharability, strongly-ordered/normal memory, cache strategy (if the cacheable bit is set), and/or read-only/write-only.
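Referring back to FIG. 2, the behavior of the virtual memory block circuit can be modeled in a few lines of C. This is a behavioral sketch only (the actual block is a circuit in programmable logic); the function names and the decision to ignore writes are illustrative, writes being resolved at the cache once initialization completes, as described with respect to FIG. 6.

    #include <stdint.h>

    /* Behavioral model of the virtual memory block: every read of an
     * assigned address returns the same constant (here, a word of zeros),
     * so no storage is required behind the block.                        */
    static uint32_t vmb_read(uint32_t addr)
    {
        (void)addr;     /* the address does not select any stored data     */
        return 0u;      /* constant output, per line 204 of FIG. 2         */
    }

    /* Writes need not be handled: once the cache is locked to the block's
     * addresses, writes to those addresses are resolved at the cache.    */
    static void vmb_write(uint32_t addr, uint32_t data)
    {
        (void)addr;
        (void)data;
    }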
FIG. 5 shows multiple blocks 502 of memory and a cache memory 504 having locked addresses for a virtual memory block. Block indices are used in the diagram instead of memory addresses for ease of reference. Each index value corresponds to the base address of one of the blocks. The blocks next to the indices associated with the virtual memory block are shown with dashed lines to signify that a virtual memory block is not backed by physical memory circuitry. Specifically, the indices of the virtual memory block are 0-7, and the blocks adjacent to indices 0-7 are drawn with dashed lines.

Each block, other than blocks associated with the virtual memory block, represents multiple words of addressable memory. For example, each block may represent 8, 16, 32, 64, or more words of storage. Although the example cache is a four-way set associative cache, it will be recognized that the example and teachings herein may be adapted to N-way associative caches. Each index may be cached to one of four different ways of the cache, as illustrated by the lines connecting the blocks representative of the virtual memory block to ways 506 and associated storage blocks of the cache 504. For example, indices 0 and 4 of the virtual memory block are cacheable to ways 0, 1, 2, 3 of the first set 508 of ways.

Depending on implementation requirements, portions of the cache may be locked by cache storage block or by cache way. In the example, ways 0 and 1 are locked, as indicated by the value stored in the lock storage elements ("lock bits") 507 associated with the locked ways. The indices of the virtual memory block are stored in tag storage elements 509 of the locked ways in the cache. In set 508, ways 0 and 1 are locked as indicated by the associated lock bits, and virtual memory block indices 0 and 4 are stored in the tag storage elements of ways 0 and 1; in set 510, ways 0 and 1 are locked as indicated by the associated lock bits, and virtual memory block indices 1 and 5 are stored in the tag storage elements of ways 0 and 1; in set 512, ways 0 and 1 are locked as indicated by the associated lock bits, and virtual memory block indices 2 and 6 are stored in the tag storage elements of ways 0 and 1; and in set 514, ways 0 and 1 are locked as indicated by the associated lock bits, and virtual memory block indices 3 and 7 are stored in the tag storage elements of ways 0 and 1. Ways 2 and 3 of the cache may be unlocked, as indicated by the values of the associated lock bits. Blocks other than blocks of the virtual memory block may be cached in the unlocked ways 2 and 3 of the cache 504.
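A software view of the lock bits 507 and tag storage elements 509 of FIG. 5 might look like the following C sketch, assuming a four-way set associative cache with four sets, as in the figure; the field widths and the set-indexing rule are illustrative assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_SETS 4
    #define NUM_WAYS 4

    typedef struct {
        bool     lock;   /* lock bit 507: the way is locked to its tag   */
        uint32_t tag;    /* tag storage element 509: locked block index  */
    } way_t;

    static way_t cache[NUM_SETS][NUM_WAYS];

    /* Reproduce the FIG. 5 arrangement: indices 0-7 of the virtual memory
     * block are locked into ways 0 and 1; index i maps to set (i % NUM_SETS). */
    static void lock_vmb_indices(void)
    {
        for (uint32_t idx = 0; idx < 8; idx++) {
            uint32_t set = idx % NUM_SETS;   /* sets 508, 510, 512, 514 */
            uint32_t way = idx / NUM_SETS;   /* ways 0 and 1            */
            cache[set][way].tag  = idx;
            cache[set][way].lock = true;
        }
        /* Ways 2 and 3 remain unlocked for ordinary cacheable blocks. */
    }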
FIG. 6 shows a process of configuring a programmable IC to implement a virtual memory block and locking addresses of the virtual memory block in a cache. At block 602, the virtual memory block is implemented in programmable logic resources of the programmable logic subsystem of the programmable IC. The virtual memory block may be provided to a designer as a predefined logic module, which the designer may instantiate in a circuit design, such as through a graphical user interface or a hardware description language (HDL). The designer may specify a depth or size of the virtual memory block. The logic module that defines the virtual memory block may be compiled with other portions of the design into configuration data for the programmable logic subsystem of the programmable IC. The virtual memory block is assigned to physical resources of the programmable logic subsystem by the design tool, and a hardware definition file is output to indicate the location of the virtual memory block to an operating system executing in the processor subsystem.

The operating system or a user's application software may configure the translation table to indicate the portions of the memory address space that are cacheable and the portions of the memory address space that are not cacheable. For example, the subset of the address space assigned to the virtual memory block may be designated as cacheable, and other portions of the address space may be designated as non-cacheable. The configuration data that implements the virtual memory block may be loaded into the programmable logic subsystem to implement the design including the virtual memory block. As indicated above, the virtual memory block outputs a constant value, for example 0, in response to any input address in a read request. The virtual memory block need not respond to write requests, because once the cache is initialized, writes to addresses of the virtual memory block are resolved at the cache.

The processing of blocks 604-622 may be performed by program code, such as application code of a user or an operating system, executing on the processor core 106 of FIG. 1, for example. At block 604, storage elements associated with the addresses of the virtual memory block in the translation table are set to a value that indicates the addresses are cacheable. If the entire cache is dedicated to the virtual memory block, storage elements associated with addresses other than the virtual memory block are set to a value that indicates the addresses are not cacheable. If less than the full cache is locked to the addresses of the virtual memory block, storage elements associated with addresses other than the virtual memory block may be set to the value that indicates the addresses are cacheable.

The cache is prepared for initialization and locking of the virtual memory block at blocks 606 and 608. The exemplary process is for locking the virtual memory block to a second-level cache. At block 606, the first-level cache is disabled and invalidated. Disabling the first-level cache disables predictive caching capabilities that the first-level cache may possess. Invalidating the first-level cache ensures that any addresses of the virtual memory block that were possibly present in the first-level cache are invalidated, and read requests to the addresses of the virtual memory block are directed to the virtual memory block.

At block 608, all ways of the second-level cache are locked, and at block 610, one of the ways of the second-level cache is unlocked. In locking and unlocking ways of the cache, the values of the lock bits associated with the ways, as shown in FIG. 5, are adjusted accordingly. Processing one way at a time eliminates the possibility of locking a way that may have been intended to be cacheable. Also, processing one way at a time prevents processor and compiler optimizations, such as speculative fetches and out-of-order execution, from accessing the cache. Addresses of the virtual memory block that map to the unlocked way of the cache are used in issuing read requests at block 612. Because the first-level cache was invalidated, the read requests are passed to the virtual memory block. In response to each read request, the virtual memory block responds with a constant value, such as a word of 0-value bits, and at block 614, the constant value is stored in the second-level cache. In addition, the addresses of the read requests to the virtual memory block are stored in tag memory storage elements associated with the unlocked way of the cache.
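The per-way initialization just described, together with the relocking and repetition of blocks 616-618 detailed in the paragraphs that follow, can be summarized in a hedged C sketch. The lock/unlock helpers are hypothetical stand-ins for device-specific lock-bit manipulation, and the assumption that a contiguous region one way in size fills the single unlocked way is an illustrative simplification.

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_WAYS 4            /* assumed way count                      */
    #define WAY_SIZE 0x2000u      /* assumed bytes captured per way         */
    #define VMB_BASE 0x80000000u  /* assumed base of the virtual memory block */

    /* Hypothetical stand-ins for device-specific cache control. */
    static void l1_disable_and_invalidate(void) { /* block 606                */ }
    static void l2_lock_all_ways(void)          { /* set all lock bits        */ }
    static void l2_unlock_way(unsigned way)     { (void)way; /* clear one bit */ }

    /* Blocks 606-618: fill and lock the second-level cache, one way at a time. */
    static void lock_vmb_in_l2(unsigned ways_needed)
    {
        l1_disable_and_invalidate();          /* block 606 */
        l2_lock_all_ways();                   /* block 608 */
        for (unsigned way = 0; way < ways_needed; way++) {
            l2_unlock_way(way);               /* block 610 */
            /* Block 612: volatile reads reach the virtual memory block, which
             * returns a constant; block 614: the constant and the addresses
             * land in the one unlocked way.                                 */
            volatile const uint32_t *p =
                (volatile const uint32_t *)(uintptr_t)(VMB_BASE + way * WAY_SIZE);
            for (size_t i = 0; i < WAY_SIZE / sizeof *p; i++)
                (void)p[i];
            l2_lock_all_ways();               /* block 616; block 618 decides
                                                 whether another pass is needed */
        }
        /* Blocks 620-622: ways not used for the virtual memory block would
         * then be unlocked, and the cache is ready for use.                */
    }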
It will be recognized that a block of constant values may be output by the virtual memory block and stored in the second-level cache to correspond to a block of addresses of the virtual memory block.

At block 616, all ways of the second-level cache are once again locked, and decision block 618 determines whether or not any additional ways of the cache should be processed. If the entire second-level cache is locked to the virtual memory block, then processing continues until all ways of the cache have been processed. If only a portion of the cache is needed for the virtual memory block, then processing continues until a sufficient number of ways have been processed to lock all the addresses of the virtual memory block. If additional ways remain to be processed, the process returns to block 610 for processing of another way of the cache. Otherwise, processing continues at block 620.

At block 620, any ways of the second-level cache not used for the virtual memory block are unlocked, and at block 622, the second-level cache is ready for use with all or portions locked to addresses of the virtual memory block. At block 624, in response to memory access requests that reference the virtual memory block, the portion of the cache that is locked to the virtual memory block is accessed. If the portion of the cache that is locked to the virtual memory block is updated, update of physical memory beyond the cache is bypassed, because the only storage for the virtual memory block is provided by the cache.

Some additional examples now follow.

In one example, a method of managing memory in a programmable integrated circuit (IC) is disclosed. Such a method may include: configuring a memory map of the programmable IC with an association of a first subset of addresses of memory address space of the programmable IC to physical memory of the programmable IC; configuring the memory map with an association of a second subset of addresses of the memory address space to a virtual memory block; and locking at least a portion of a cache memory of the programmable IC to the second subset of addresses.

Such a method may further include: implementing the virtual memory block as a circuit on the programmable IC, wherein the circuit that implements the virtual memory block returns a constant value in response to any input address in the second subset of addresses.

Such a method may further include: accessing the locked portion of the cache memory in response to a memory access request that references an address in the second subset of addresses; and bypassing updating of the physical memory for updates to the locked portion of the cache memory.

In some such method, the locking may include locking one or more ways of a plurality of ways of the cache memory.

In some such method, the cache memory may be a second-level cache.

In some such method, the programmable IC has a processor subsystem and a programmable logic subsystem, a second-level cache in the processor subsystem, a portion of the address space assigned to physical memory in the programmable logic subsystem, and the second subset of addresses associated with a subset of the portion of the address space assigned to physical memory in the programmable logic subsystem.

Some such method may further include implementing the virtual memory block as a circuit in the programmable logic subsystem, wherein the circuit that implements the virtual memory block returns a constant value in response to any input address in the second subset of addresses.

Some such method may further include storing a first value in storage elements
associated with the addresses of the first subset, and storing a second value in storage elements associated with the addresses of the second subset, wherein the first value indicates that the addresses of the first subset are noncacheable, and the second value indicates that the addresses of the second subset are cacheable.

In some such method, the locking may include: storing one or more addresses of the second subset in storage elements associated with one or more ways of the cache memory; storing a first value in one or more storage elements associated with the one or more ways of the cache memory, wherein the first value in the one or more storage elements indicates that the associated one or more ways are locked to the one or more addresses of the second subset.

In some such method, the cache memory may be a multi-way set associative cache, and the method may further include: implementing the virtual memory block as a circuit on the programmable IC, wherein the circuit that implements the virtual memory block returns a constant value in response to any input address in the second subset of addresses; selecting one way of the cache memory; issuing read requests to the virtual memory block at addresses of the second subset of addresses and corresponding to the one way; storing the constant value returned from the circuit that implements the virtual memory block in memory of the selected one way of the cache memory; and repeating the selecting, issuing, and storing for one or more other ways of the cache memory.

In some such method, the selecting, issuing, and storing for one or more other ways of the cache memory may be repeated until all the ways of the cache memory have been processed. In some such method, the selecting, issuing, and storing for one or more other ways of the cache memory may be repeated for fewer than all the ways of the cache memory.

Some such method may further include: implementing the virtual memory block as a circuit on the programmable IC, wherein the circuit that implements the virtual memory block returns a constant value in response to any input address in the second subset of addresses.

Some such method may further include: accessing the locked portion of the cache memory in response to a memory access request that references an address in the second subset; and bypassing updating of the physical memory for updates to the locked portion of the cache memory.

Some such method may further include: storing a first value in storage elements associated with the addresses of the first subset, and storing a second value in storage elements associated with the addresses of the second subset, wherein the first value indicates that the addresses of the first subset are noncacheable, and the second value indicates that the addresses of the second subset are cacheable.

In another example, a programmable IC is disclosed.
Such an IC may include: a processor subsystem including memory circuitry that implements a first portion of memory address space of the programmable IC; a programmable logic subsystem including programmable logic circuitry and memory circuitry that implements a second portion of the memory address space; a cache circuit coupled to the memory circuitry of the processor subsystem and to the memory circuitry of the programmable logic subsystem; and a virtual memory block circuit implemented in the programmable logic circuitry, wherein the virtual memory block circuit may be responsive to addresses of a subset of the second portion of the memory address space; wherein: the cache circuit includes lock storage elements and tag storage elements associated with storage blocks of the cache circuit, a plurality of the tag storage elements are configured with addresses of the subset of the second portion of the memory address space, and one or more of the lock storage elements are configured with a first value that indicates that one or more associated storage blocks are locked to the addresses of the subset of the second portion of the memory address space in the plurality of tag storage elements.

In such a programmable IC, the processor subsystem may include flag storage elements associated with the addresses of memory address space; a first subset of the flag storage elements are configured with a first value that indicates that caching of data from the first portion of the memory address space may be disabled; and a second subset of the flag storage elements are configured with a second value that indicates that caching of data from the second portion of memory address space may be enabled.

In such a programmable IC, the virtual memory block circuit may be configured to return a constant value in response to any input address in the subset of the second portion of the memory address space.

In such a programmable IC, the cache circuit may be configured and arranged to: access a storage block locked to an address of the subset of the second portion of the memory address space in response to a memory access request that references the address in the subset of the second portion of the memory address space; and bypass updating of the physical memory for an update to the storage block locked to the address of the subset of the second portion of the memory address space.

In such a programmable IC, the processor subsystem may include a first-level cache coupled to the cache circuit, and the cache circuit may be a second-level cache.

The methods and circuits are thought to be applicable to a variety of systems and applications. Other aspects and features will be apparent to those skilled in the art from consideration of the specification. For example, though aspects and features may in some cases be described in individual figures, it will be appreciated that features from one figure can be combined with features of another figure even though the combination is not explicitly shown or explicitly described as a combination. It is intended that the specification and drawings be considered as examples only, with a true scope of the invention being indicated by the following claims. |
A heat transfer device may be secured to an integrated circuit without the use of tools in some embodiments. After placing the integrated circuit in a socketed holder, the heat transfer device mount may be pivoted atop the integrated circuit. A heat transfer device may be attached to the mount. The mount may abut a holder that receives the integrated circuit. The mount may be latched to the holder by undergoing a series of simple mechanical displacements. |
What is claimed is:

1. A method comprising: pivoting a heat transfer device mount over an integrated circuit; translating said mount relative to said integrated circuit; and securing a heat transfer device to said mount in contact with said integrated circuit.

2. The method of claim 1 wherein pivoting said mount includes pivoting said mount with said heat transfer device attached to said mount.

3. The method of claim 1 including rotating said heat transfer device relative to said mount to cause a portion of said heat transfer device to extend through said mount.

4. The method of claim 3 including causing said mount to push against said integrated circuit.

5. The method of claim 4 including causing said portion to press upwardly on said mount.

6. The method of claim 1 wherein translating said mount includes causing a latch on said mount to engage a catch on a holder for said integrated circuit.

7. The method of claim 6 wherein pivoting the mount includes pivoting the latch on said mount into a catch on said holder.

8. The method of claim 7 wherein translating said mount includes causing said latch to move inwardly into said catch.

9. The method of claim 8 including rotating said heat transfer device relative to said mount to cause said latch to engage said catch.

10. The method of claim 9 wherein rotating said mount includes pivoting said mount about an axle.

11. The method of claim 10 wherein translating said mount includes causing said mount to move relative to said axle.

12. A heat transfer device comprising: a holder to receive an integrated circuit; a mount coupled to said holder for relative pivotal motion; and a latching mechanism that couples said mount to said holder when said mount is translated relative and substantially parallel to said holder.

13. The device of claim 12 wherein said latching mechanism includes a latch on said mount and a catch on said holder such that said latch pivots to a position adjacent said catch.

14. The device of claim 13 wherein said mount is translatable relative to said holder to cause said latch to engage said catch.

15. The device of claim 14 including an axle allowing said mount to pivot relative to said holder, wherein said mount is coupled to said holder for pivotal movement around said axle such that translation is allowed between said mount and said holder.

16. The device of claim 15 wherein said axle is retained within an elliptical journal to allow relative movement between said mount and said holder.

17. The device of claim 12 including a heat transfer device secured to said mount.

18. The device of claim 17 including a heat sink threadedly coupled to said mount.

19. The device of claim 18 including an active heat transfer device.

20. The device of claim 18 wherein said heat sink may be rotated to thread a portion of said heat sink through said mount.

21. The device of claim 20 including an integrated circuit held by said holder, wherein said portion engages the integrated circuit held by said holder.

22. The device of claim 21 wherein, when said mount is secured to said holder, said portion provides an upward force applied to said mount when said portion engages said integrated circuit.

23.
A motherboard comprising: a circuit board; a socket coupled to said circuit board; a holder coupled to said circuit board around said socket; a processor secured in said socket; a mount coupled to said holder for relative pivotal motion; and a latching mechanism that couples said mount to said holder when said mount is translated relative and substantially parallel to said holder.

24. The motherboard of claim 23 wherein said latching mechanism includes a latch on said mount and a catch on said holder such that said latch pivots to a position adjacent said catch.

25. The motherboard of claim 24 wherein said mount is translatable relative to said holder to cause said latch to engage said catch.

26. The motherboard of claim 25 including an axle to allow said mount to pivot relative to said holder, wherein said mount is coupled to said holder for pivotal movement around said axle such that translation is allowed between said mount and said holder.

27. The motherboard of claim 26 wherein said axle is retained within an elliptical journal to allow relative movement between said mount and said holder.

28. The motherboard of claim 23 including a heat transfer device secured to said mount.

29. The motherboard of claim 28 including a heat sink threadedly coupled to said mount.

30. The motherboard of claim 29 including an active heat transfer device coupled to said heat sink. |
BACKGROUND

This invention relates generally to heat sinks for integrated circuits. Because of the heat generated by some integrated circuits, an integrated circuit may be intimately associated with a heat transfer device that removes heat from an integrated circuit die. An integrated circuit die may be packaged and the package may be coupled to a heat transfer device. Alternatively, the die may be exposed for direct contact by the heat transfer device. A heat transfer device, such as a heat sink, has a high heat transfer coefficient.

Processors may become excessively hot during operation. This heat may ultimately result in damage to the processor and may adversely affect the speed of its operation. Thus, it is desirable to contact the processor with a heat transfer device that removes heat. Heat transfer devices may be active or passive. An active heat transfer device normally includes a fan which forces air over the integrated circuit to increase its rate of heat transfer. A passive heat transfer device is generally a heat sink with desirable heat transfer characteristics. Combinations of active and passive heat transfer devices are commonly utilized.

Attaching the heat transfer device over an integrated circuit on a circuit board can become a relatively complex operation. Generally, it is desirable to enable the removal of the integrated circuit device from the heat transfer device. This facilitates assembly and repair of the heat transfer device and testing of the integrated circuit. In many cases, the heat transfer device is relatively bulky. It is generally desirable to contact the integrated circuit device with the heat transfer device. Commonly, an integrated circuit electrically couples to a variety of contacts on a circuit board, for example using pins that engage slots in a socket or other carrier. Thus, the integrated circuit may be attached to the circuit board and the heat transfer device may be attached over the integrated circuit in a removable, electrically contacting engagement. Therefore, the connection of the integrated circuit to the circuit board and the association of the heat transfer device with the integrated circuit may become complex. For example, in connection with some designs, the attachment of the various components may require the use of tools. The use of tools generally results in longer assembly time: the assembler must assemble the components and then use a tool to secure the components together.

It would be desirable to enable the connection of the heat transfer device to the integrated circuit holder without requiring the use of any tools. Moreover, it would be desirable to have a way to readily and easily associate these components with one another.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a perspective view of one embodiment of the present invention showing the placement of the integrated circuit;

FIG. 2 is a perspective view of the embodiment of FIG. 1 after placement of the integrated circuit;

FIG. 3 is a perspective view of the embodiment shown in FIG. 1 after the heat transfer device has been pivoted over the integrated circuit;

FIG. 4 is a perspective view of the embodiment of FIG. 3 showing the process of securing a heat transfer device to the integrated circuit;

FIG. 5 is a partial, enlarged cross-sectional view taken generally along the line 5--5 of FIG. 2;

FIG. 6 is a partial, enlarged cross-sectional view corresponding to FIG. 5 but taken generally along the line 6--6 in FIG. 3;
FIG. 7 is a partial, enlarged cross-sectional view taken generally along the line 7--7 in FIG. 3;

FIG. 8 is a partial, enlarged cross-sectional view corresponding to FIG. 7 but taken along the line 8--8 in FIG. 4;

FIG. 9 is a partial, enlarged cross-sectional view taken generally along the line 8--8 in FIG. 4 after rotation of the heat transfer device to secure the heat sink to the integrated circuit;

FIG. 10 is an exploded view of the heat transfer device in accordance with one embodiment of the present invention; and

FIG. 11 is a side elevational view of the heat sink shown in FIG. 10.

DETAILED DESCRIPTION

Referring to FIG. 1, an electronic device 10 may include an integrated circuit 22 secured within a holder 14 in turn secured to a circuit board 12. A heat transfer device 35 may be secured to the holder 14 for pivotal movement around one of the edges of the holder 14. The holder 14, in one embodiment of the present invention, includes four sides 15 that define a frame around the integrated circuit 22. Each corner of the holder 14 may be secured by a threaded fastener 16 to the circuit board 12. In one embodiment of the present invention, the device 10 is a motherboard and the integrated circuit 22 is a processor.

In one embodiment of the present invention, the integrated circuit 22 includes an organic land grid array (OLGA) package. However, other packaging techniques may be utilized. In the illustrated package, the integrated circuit die 24 is exposed. The integrated circuit 22 may be secured to a socket 20 within the frame 14. The socket 20 may be secured directly to the circuit board 12 in one embodiment of the present invention. The socket 20 may include contacts (not shown) which mate with contacts (not shown) on the integrated circuit 22. In one embodiment of the present invention, the socket 20 may include pins that engage slots in the integrated circuit 22. However, any type of integrated circuit connection technique may be utilized.

The heat transfer device 35 may include a threaded heat sink member 38, a heat sink mount 30, an active heat transfer device 36, an electrical connector 32 to supply power to the device 36, and a heat sink 34. In some embodiments, the active heat transfer device 36, which may include a fan 44, may not be included. While the heat sink 34 is shown as a pin type heat sink, any other heat sink design may be utilized including, for example, those that include fins.

As shown in FIG. 2, the integrated circuit 22 may be placed on the socket 20 within the holder 14 with the die 24 facing upwardly. The heat transfer device 35 may rest in the overcenter position shown in FIG. 2. Interference between the holder 14 and the mount 30 may prevent further clockwise rotation from the position shown in FIG. 2.

Referring to FIG. 3, the heat transfer device 35 may pivot counterclockwise around the pivotal connection 37 so that the pivotally mounted heat sink mount 30 rests on top of the holder 14. The heat sink member 38 is then in direct contact with the die 24, in accordance with one embodiment of the present invention. However, other integrated circuits 22 may be utilized and it is not essential (although it may be advantageous) that the heat sink member 38 directly contact the die 24.

The pivotal connection 37 includes a slotted member 28, shown in FIG. 5, connected to, or integral with, the heat sink mount 30. An axle 46, associated with the holder 14, is journaled within an elliptical slot 47 inside the mount 30.
Because the slot 47 is elongated, relative movement is possible between the member 28 and the axle 46. In other embodiments, the axle 46 may be included as part of the mount 30 and the member 28 may be a part of the holder 14. In any case, the arrangement of the axle 46, journaled within the member 28, allows pivotal motion of the heat transfer device 35 around the side 15b of the holder 14 (from the position shown in FIG. 2) until the device 35 contacts the holder 14 in face-to-face abutment, as shown in FIG. 3.

While a technique is described in which the mount 30, heat sink 34, and active heat transfer device 36 are pivoted as a unit, other techniques may also be used. For example, the mount 30 may be pivoted on its own, and other components may be thereafter secured to the mount 30.

The heat transfer device 35 assumes an "overbite" relationship with the holder 14, as shown in FIG. 3. Namely, a cantilevered latch 40 on one edge of the heat sink mount 30 extends over and beyond the side 43 of the side 15a of the holder 14. Referring to FIG. 7, the latch 40 is L-shaped and rests on a land 41 on a catch 42. The side 43 of the catch 42 is offset from the surface 49 of the catch 42, forming an effective land or stop 41. Because the latch 40 extends outwardly past the surface 43, the relationship between the heat transfer device 35 and the side 43 may be described as an overbite relationship. In addition, the engagement of the horizontal portion 39 of the latch 40 with the land 41 controls the extent of pivotal movement between the heat transfer device 35 and the holder 14. This further aligns the horizontal portion 39 with the catch 42 defined within the holder 14.

While an advantageous arrangement is shown in which the heat transfer device 35 pivots around a first side 15b of the holder 14 and latches on an opposed side 15a of the holder 14, other arrangements may be possible as well. For example, an intermediate latching mechanism may also be used.

Referring again to FIG. 3, the heat transfer device 35 may then be translated in the direction indicated by the arrow A relative to the holder 14. In the illustrated embodiment, the heat transfer device 35 is translated along a plane parallel to the circuit board 12. It is translated in a direction that causes the latch 40 to move towards the pivot axle 46 (FIG. 5) and the catch 42 (FIG. 7). As a result, the arrangement of the axle 46 relative to the member 28 changes from that shown in FIG. 5 to that shown in FIG. 6. That is, there is relative translating motion between the axle 46 within the slot 47 and the member 28. At the same time, this translation causes the latch 40 and its horizontal portion 39 to fully engage the catch 42 and to abut against the rear surface 51 of the catch 42, as shown in FIG. 8. In this situation, the latch 40 may also abut against the surface 41 in one embodiment of the present invention. Thus, the latch 40 has now engaged the catch 42. However, the latch 40 is positioned at the bottom of the catch 42 relative to the heat transfer device 35.

Referring next to FIG. 4, the heat transfer device 35 may be rotated in the direction indicated by the arrows B in accordance with one embodiment of the present invention. This may be done by rotating the active heat transfer device 36 and/or the heat sink 34 relative to the heat sink mount 30. This rotation causes the heat sink member 38 to screw into the heat sink mount 30 and to extend further downwardly.
The member 38 continues to thread downwardly, in response to the rotation indicated by the arrow B, until the heat sink member 38 comes into tight contact with the integrated circuit 22. Thus, the heat transfer device 35 may be threaded into the heat sink mount 30. Upward motion of the heat transfer device 35 may be resisted by the engagement between the latch 40 and the catch 42. More particularly, as shown in FIG. 9, the latch 40 moves upwardly relative to the catch 42, in response to the rotation of the heat transfer device 35, until the horizontal portion 39 engages the upper edge 53 of the catch 42 in the holder 14. In this position, the heat transfer device 35 is securely latched against motion relative to the holder 14. The force of the heat sink member 38 against the integrated circuit 22, and particularly the die 24, provides an upward force which secures the latch 40 to the holder 14 in one embodiment of the invention.

Referring to FIG. 10, the heat transfer device 35 includes the active heat transfer device 36, the heat sink 34, and the mount 30. In one embodiment of the present invention, the active heat transfer device 36 and the heat sink 34 may be secured by threaded fasteners which engage the interstices between the upstanding pins 60 of the heat sink 34. The heat sink 34, shown in FIG. 11, includes a base plate 62 from which the pins 60 extend. In addition, the centrally located, downwardly depending threaded member 38 is connected to the plate 62. A wide variety of heat transfer devices may be used as the heat transfer device 35.

The member 38 threadedly engages a ring 64 centrally located within the mount 30. As a result, either or both of the active heat transfer device 36 and heat sink 34 may be rotated to cause the threaded member 38 to thread through the mount 30 and to engage the die 24.

The heat transfer device 35 may be easily and accurately secured onto the integrated circuit 22 without the use of tools in some embodiments. Through a simple pivot, translate, and rotate motion, the necessary connections may be securely and advantageously made.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention. |
Systems and techniques for stream selection from multi-stream storage devices. Notification of a KVS tree write request for a multi-stream storage device is received. The notification includes a KVS tree scope corresponding to data in the write request. A stream identifier (ID) is assigned to the write request based on the KVS tree scope and a stability value of the write request. The stream ID is returned to govern stream assignment to the write request, the stream assignment modifying a write operation of the multi-stream storage device. |
1. A system comprising processing circuitry configured to: receive a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS tree scope corresponding to data in the write request; assign a stream identifier (ID) to the write request based on the KVS tree scope and a stability value of the write request; and return the stream ID to manage a stream assignment to the write request, the stream assignment modifying a write operation of the multi-stream storage device.

2. The system of claim 1, wherein the processing circuitry is configured to assign the stability value based on the KVS tree scope.

3. The system of claim 2, wherein the stability value is one of a predefined set of stability values.

4. The system of claim 3, wherein the predefined set of stability values includes HOT, WARM, and COLD, wherein HOT indicates a lowest expected lifetime of the data on the multi-stream storage device and COLD indicates a highest expected lifetime of the data on the multi-stream storage device.

5. The system of claim 2, wherein, to assign the stability value, the processing circuitry is configured to locate the stability value from a data structure using a portion of the KVS tree scope.

6. The system of claim 5, wherein the portion of the KVS tree scope includes a tree ID of the data.

7. The system of claim 5, wherein the portion of the KVS tree scope includes a level ID of the data.

8. The system of claim 5, wherein the portion of the KVS tree scope includes a type of the data.

9. The system of claim 2, wherein, to assign the stability value, the processing circuitry is configured to: maintain a set of frequencies of stability value assignments for level IDs, each member of the set of frequencies corresponding to a unique level ID; retrieve, from the set of frequencies, a frequency corresponding to a level ID in the KVS tree scope; and select the stability value from a mapping of stability values and frequency ranges based on the frequency.

10. The system of claim 1, wherein, to assign the stream ID to the write request based on the KVS tree scope and the stability value of the write request, the processing circuitry is configured to: create a stream scope value from the KVS tree scope; perform a lookup in a selected stream data structure using the stream scope value; and return the stream ID corresponding to the stream scope value from the selected stream data structure.

11. The system of claim 10, wherein, to perform the lookup in the selected stream data structure, the processing circuitry is configured to: fail to find the stream scope value in the selected stream data structure; perform a lookup in an available stream data structure using the stability value; receive a result of the lookup including the stream ID; and add an entry to the selected stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of when the entry was added.

12. The system of claim 11, wherein the processing circuitry is further configured to initialize the available stream data structure, the initializing including the processing circuitry being configured to: obtain a number of streams obtainable from the multi-stream storage device; obtain stream IDs for all streams obtainable from the multi-stream storage device, each stream ID being unique; add the stream IDs to stability value groups; and create a record for each stream ID in the available stream data structure, the record including the stream ID, a device ID of the multi-stream storage device, and a stability value corresponding to the stability value group of the stream ID.

13. The system of claim 10, wherein, to perform the lookup in the selected stream data structure, the processing circuitry is configured to: fail to find the stream scope value in the selected stream data structure; locate the stream ID from the selected stream data structure or an available stream data structure based on contents of the selected stream data structure; and create an entry in the selected stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of when the entry was added.

14. The system of claim 13, wherein, to locate the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure, the processing circuitry is configured to: compare a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries is equal to the number of second entries; locate a group of entries corresponding to the stability value from the selected stream data structure; and return the stream ID of an entry with the oldest timestamp in the group of entries.

15. At least one machine readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising: receiving a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS tree scope corresponding to data in the write request; assigning a stream identifier (ID) to the write request based on the KVS tree scope and a stability value of the write request; and returning the stream ID to manage a stream assignment to the write request, the stream assignment modifying a write operation of the multi-stream storage device.

16. The at least one machine readable medium of claim 15, wherein the operations comprise assigning the stability value based on the KVS tree scope.

17. The at least one machine readable medium of claim 16, wherein the stability value is one of a predefined set of stability values.

18. The at least one machine readable medium of claim 17, wherein the predefined set of stability values includes HOT, WARM, and COLD, wherein HOT indicates a lowest expected lifetime of the data on the multi-stream storage device and COLD indicates a highest expected lifetime of the data on the multi-stream storage device.

19. The at least one machine readable medium of claim 16, wherein assigning the stability value includes locating the stability value from a data structure using a portion of the KVS tree scope.

20. The at least one machine readable medium of claim 19, wherein the portion of the KVS tree scope includes a tree ID of the data.

21. The at least one machine readable medium of claim 20, wherein the portion of the KVS tree scope includes a level ID of the data.

22. The at least one machine readable medium of claim 21, wherein the portion of the KVS tree scope includes a node ID of the data.

23. The at least one machine readable medium of claim 19, wherein the portion of the KVS tree scope includes a level ID of the data.

24. The at least one machine readable medium of claim 19, wherein the portion of the KVS tree scope includes a type of the data.

25. The at least one machine readable medium of claim 16, wherein assigning the stability value includes: maintaining a set of frequencies of stability value assignments for level IDs, each member of the set of frequencies corresponding to a unique level ID; retrieving, from the set of frequencies, a frequency corresponding to a level ID in the KVS tree scope; and selecting the stability value from a mapping of stability values and frequency ranges based on the frequency.

26. The at least one machine readable medium of claim 15, wherein assigning the stream ID to the write request based on the KVS tree scope and the stability value of the write request includes: creating a stream scope value from the KVS tree scope; performing a lookup in a selected stream data structure using the stream scope value; and returning the stream ID corresponding to the stream scope value from the selected stream data structure.

27. The at least one machine readable medium of claim 26, wherein performing the lookup in the selected stream data structure includes: failing to find the stream scope value in the selected stream data structure; performing a lookup in an available stream data structure using the stability value; receiving a result of the lookup including the stream ID; and adding an entry to the selected stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of when the entry was added.

28. The at least one machine readable medium of claim 27, wherein the operations comprise initializing the available stream data structure by: obtaining a number of streams obtainable from the multi-stream storage device; obtaining stream IDs for all streams obtainable from the multi-stream storage device, each stream ID being unique; adding the stream IDs to stability value groups; and creating a record for each stream ID in the available stream data structure, the record including the stream ID, a device ID of the multi-stream storage device, and a stability value corresponding to the stability value group of the stream ID.

29. The at least one machine readable medium of claim 26, wherein performing the lookup in the selected stream data structure includes: failing to find the stream scope value in the selected stream data structure; locating the stream ID from the selected stream data structure or an available stream data structure based on contents of the selected stream data structure; and creating an entry in the selected stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of when the entry was added.

30. The at least one machine readable medium of claim 29, wherein locating the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure includes: comparing a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries is equal to the number of second entries; locating a group of entries corresponding to the stability value from the selected stream data structure; and returning the stream ID of an entry with the oldest timestamp in the group of entries.

31. A machine-implemented method comprising: receiving a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS tree scope corresponding to data in the write request; assigning a stream identifier (ID) to the write request based on the KVS tree scope and a stability value of the write request; and returning the stream ID to manage a stream assignment to the write request, the stream assignment modifying a write operation of the multi-stream storage device.

32. The method of claim 31, comprising assigning the stability value based on the KVS tree scope.

33. The method of claim 31, wherein assigning the stream ID to the write request based on the KVS tree scope and the stability value of the write request includes: creating a stream scope value from the KVS tree scope; performing a lookup in a selected stream data structure using the stream scope value; and returning the stream ID corresponding to the stream scope value from the selected stream data structure.

34. The method of claim 33, wherein performing the lookup in the selected stream data structure includes: failing to find the stream scope value in the selected stream data structure; locating the stream ID from the selected stream data structure or an available stream data structure based on contents of the selected stream data structure; and creating an entry in the selected stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of when the entry was added.

35. A system comprising: means for receiving a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS tree scope corresponding to data in the write request; means for assigning a stream identifier (ID) to the write request based on the KVS tree scope and a stability value of the write request; and means for returning the stream ID to manage a stream assignment to the write request, the stream assignment modifying a write operation of the multi-stream storage device.

36. The system of claim 35, comprising means for assigning the stability value based on the KVS tree scope.

37. The system of claim 35, wherein assigning the stream ID to the write request based on the KVS tree scope and the stability value of the write request includes: creating a stream scope value from the KVS tree scope; performing a lookup in a selected stream data structure using the stream scope value; and returning the stream ID corresponding to the stream scope value from the selected stream data structure.

38. The system of claim 37, wherein performing the lookup in the selected stream data structure includes: failing to find the stream scope value in the selected stream data structure; locating the stream ID from the selected stream data structure or an available stream data structure based on contents of the selected stream data structure; and creating an entry in the selected stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of when the entry was added. |
Stream selection for multi-stream storage

PRIORITY APPLICATION

The present application claims the benefit of priority to U.S. Application Serial No. ___.

TECHNICAL FIELD

Embodiments described herein relate generally to key-value data stores and, more particularly, to stream selection for a multi-stream storage device.

BACKGROUND

A data structure is an organization of data that permits a variety of ways of interacting with the data stored therein. Data structures may be designed to permit efficient searches of the data, for example, in a binary search tree; to permit efficient storage of sparse data, for example, using linked lists; or to permit efficient storage of searchable data, for example, using B-trees.

Key-value data structures accept key-value pairs and are configured to respond to queries for the key. Key-value data structures may include such structures as dictionaries (e.g., maps, hash maps, etc.) in which the key is stored in a list that links (or contains) the corresponding value. While these structures are useful in memory (e.g., in main or system state memory as opposed to storage), the storage representations of these structures in persistent storage (e.g., on disk) may be inefficient. Accordingly, a class of log-based storage structures has been introduced. An example is the log structured merge tree (LSM tree).

There are various LSM tree implementations, but many conform to a design in which key-value pairs are accepted into a key-sorted in-memory structure. As the in-memory structure fills, the data is distributed among several child nodes. The distribution is such that the keys are sorted within the child nodes themselves and ordered among the child nodes. For example, at a first tree level with three child nodes, the largest key in the leftmost child node is smaller than the smallest key of the middle child node, and the largest key in the middle child node is smaller than the smallest key of the rightmost child node. This structure permits efficient searching of both keys and key ranges in the data structure.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example and not of limitation, the various embodiments discussed in this document.

FIG. 1 illustrates an example of a KVS tree, in accordance with an embodiment.

FIG. 2 is a block diagram illustrating an example of writing to a multi-stream storage device, in accordance with an embodiment.

FIG. 3 illustrates an example of a method to facilitate writing to a multi-stream storage device, in accordance with an embodiment.

FIG. 4 is a block diagram illustrating an example of storage organization for keys and values, in accordance with an embodiment.

FIG. 5 is a block diagram illustrating an example of a configuration of a key block and a value block, in accordance with an embodiment.

FIG. 6 illustrates an example of a KB tree, in accordance with an embodiment.

FIG. 7 is a block diagram illustrating ingest into a KVS tree, in accordance with an embodiment.

FIG. 8 illustrates an example of a method for KVS tree ingest, in accordance with an embodiment.

FIG. 9 is a block diagram illustrating key compression, in accordance with an embodiment.
FIG. 10 illustrates an example of a method for key compression, in accordance with an embodiment.

FIG. 11 is a block diagram illustrating key-value compression, in accordance with an embodiment.

FIG. 12 illustrates an example of a method for key-value compression, in accordance with an embodiment.

FIG. 13 illustrates an example of overflow values and their relationship to a tree, in accordance with an embodiment.

FIG. 14 illustrates an example of a method for an overflow value function, in accordance with an embodiment.

FIG. 15 is a block diagram illustrating overflow compression, in accordance with an embodiment.

FIG. 16 illustrates an example of a method for overflow compression, in accordance with an embodiment.

FIG. 17 is a block diagram illustrating boosting compression, in accordance with an embodiment.

FIG. 18 illustrates an example of a method for boosting compression, in accordance with an embodiment.

FIG. 19 illustrates an example of a method for performing maintenance on a KVS tree, in accordance with an embodiment.

FIG. 20 illustrates an example of a method for modifying a KVS tree operation, in accordance with an embodiment.

FIG. 21 is a block diagram illustrating a key search, in accordance with an embodiment.

FIG. 22 illustrates an example of a method for performing a key search, in accordance with an embodiment.

FIG. 23 is a block diagram illustrating a key scan, in accordance with an embodiment.

FIG. 24 is a block diagram illustrating a key scan, in accordance with an embodiment.

FIG. 25 is a block diagram illustrating a prefix scan, in accordance with an embodiment.

FIG. 26 is a block diagram illustrating an example of a machine on which one or more embodiments may be implemented.

DETAILED DESCRIPTION

The LSM tree has become a popular storage structure for data in which high-volume writes are expected and efficient access to the data is expected. To support these features, portions of the LSM tree are tuned for the media on which they are maintained, and background processes generally move data between the different portions (e.g., from the in-memory portion to the on-disk portion). As used herein, in-memory refers to a random-access and byte-addressable device (e.g., static random access memory (SRAM) or dynamic random access memory (DRAM)), and on-disk refers to a block-addressable device (e.g., a hard disk drive, an optical disc, a digital versatile disc, or a solid-state drive (SSD) such as a flash-memory-based device), which is also referred to as a media device or a storage device. The LSM tree uses the ready access provided by the in-memory device to sort incoming data by key, providing ready access to the corresponding values. As the data is merged into the on-disk portion, the data residing on disk is merged with the new data and written back to disk in blocks.

While LSM trees have become a popular structure underlying several database and high-capacity storage (e.g., cloud storage) designs, they do have some drawbacks. First, new data and old data are continually merged to keep the internal structure of the keys sorted, causing significant write amplification. Write amplification is an increase over the minimum number of data writes imposed by a given storage technology. For example, to store data, the data must be written to disk at least once. This can be done, for example, by simply appending the most recent piece of data to the end of the already written data. However, this structure is slow to search (e.g., search time grows linearly with the amount of data) and can result in inefficiencies when data is changed or deleted. LSM trees increase write amplification because they read data from disk to be merged with new data and then rewrite that data back to disk. The write amplification problem can be exacerbated by storage device activities, such as defragmentation of hard disk drives or garbage collection of SSDs. Write amplification on SSDs can be particularly harmful because these devices can wear out as writes accumulate. That is, an SSD has a finite lifetime measured in writes. Thus, write amplification on an SSD tends to shorten the usable life of the underlying hardware.
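As a point of reference, write amplification is commonly quantified as the following ratio; this is the conventional industry formulation rather than a definition given in this document:

```latex
\mathrm{WAF} \;=\; \frac{\text{bytes physically written to the storage media}}{\text{bytes logically written by the host}}
```

Under this measure, appending data once and never rewriting it yields a WAF near 1, while an LSM tree that re-reads and rewrites a datum during k merge operations exhibits a WAF of roughly k + 1 for that datum.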
A second problem with LSM trees involves the large amount of space that can be consumed when performing merges. An LSM tree ensures that portions of the keys on disk are sorted. If the amount of data residing on disk is large, a large amount of temporary, or scratch, space can be consumed to perform the merge. This can be somewhat mitigated by partitioning the on-disk portions into non-overlapping structures to permit merging on subsets of the data, but a balance between structural overhead and performance can be difficult to achieve.

A third issue with LSM trees involves potentially limited write throughput. This problem stems from the essentially always-sorted nature of all of the LSM data. Thus, high-volume writes that overwhelm the in-memory portion must wait until the in-memory portion is cleared by a potentially time-consuming merge operation. To address this issue, the write buffer (WB) tree has been proposed, in which smaller data insertions are manipulated to avoid the merge problems of this scenario. Specifically, a WB tree hashes incoming keys to spread the data, and stores the key-hash and value combinations in smaller intake sets. These sets can be merged at various times, or written to child nodes based on the key-hash values. This avoids the expensive merge operations of the LSM tree while remaining highly performant when a specific key is sought. However, the key-hash sorting of the WB tree results in expensive whole-tree scans to locate values that are not directly referenced by a key hash, such as when a range of keys is searched.

To address the issues described above, a KVS tree and corresponding operations are described herein. The KVS tree is a tree data structure including nodes with connections between a parent node and a child node based on a predetermined derivation of a key rather than the content of the tree. The nodes include temporally ordered sequences of key-value sets (kvsets). The kvsets contain key-value pairs in a key-sorted structure. Kvsets are also immutable once written. The KVS tree achieves the write throughput of the WB tree while improving upon WB tree searching by maintaining kvsets in nodes, the kvsets including sorted keys, as well as, in an example, key metrics (e.g., Bloom filters, minimum and maximum keys, etc.), to provide efficient search of the kvsets. In many examples, the KVS tree can improve upon the temporary storage issues of the LSM tree by separating keys from values and merging smaller kvset collections. Additionally, the described KVS tree can reduce write amplification through a variety of maintenance operations on the kvsets.

Furthermore, because the kvsets in nodes are immutable, issues such as write wear on an SSD can be managed by the data structure, reducing the garbage collection activity of the device itself. This has the added benefit of freeing internal device resources (e.g., bus bandwidth, processing cycles, etc.), resulting in better external drive performance (e.g., read or write speed). Additional details and example implementations of the KVS tree and operations thereon are described below.
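The structural relationships just described might be modeled as follows. This C sketch is purely illustrative; the type and field names are assumptions, and the on-media form (key blocks and value blocks) discussed later is considerably richer.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal structural sketch of a KVS-tree node. */
struct kvset {
    uint64_t      id;        /* kvset identification (ID) */
    struct kvset *newer;     /* adjacent, newer kvset in the node, or NULL */
    struct kvset *older;     /* adjacent, older kvset in the node, or NULL */
    /* immutable, key-sorted key-value entries would live here */
};

struct kvs_node {
    struct kvset     *newest;     /* the 'N' end of the temporal sequence */
    struct kvset     *oldest;     /* the 'O' end of the temporal sequence */
    struct kvs_node **children;   /* child is chosen by a deterministic
                                     function of the key, not by content */
    size_t            n_children;
};
```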
FIG. 1 illustrates an example of a KVS tree 100, in accordance with an embodiment. The KVS tree 100 is a key-value data structure organized as a tree. As a key-value data structure, values are stored in the tree 100 with corresponding keys that reference the values. Specifically, key entries are used to contain both the key and additional information (e.g., a reference to the value); however, unless otherwise specified, the key entries are simply referred to as keys for simplicity. The keys themselves have a total ordering within the tree 100; therefore, the keys can be sorted amongst each other. Keys can also be divided into sub-keys. Generally, sub-keys are non-overlapping portions of a key. In an example, the total ordering of keys is based on comparing like sub-keys between multiple keys (e.g., a first sub-key of one key is compared to the first sub-key of another key). In an example, a key prefix is a beginning portion of a key. The key prefix may be composed of one or more sub-keys, when sub-keys are used.

The tree 100 includes one or more nodes, such as node 110. The node 110 includes a temporally ordered sequence of immutable key-value sets (kvsets). As illustrated, kvset 115 includes an 'N' badge to indicate that it is the newest of the sequence, while kvset 120 includes an 'O' badge to indicate that it is the oldest of the sequence. Kvset 125 includes an 'I' badge to indicate that it is intermediate in the sequence. These badges are used throughout to label kvsets; however, another badge (e.g., 'X') represents a specific kvset rather than its position in a sequence (e.g., new, intermediate, old, etc.), unless it is a tilde '~', in which case it denotes simply an anonymous kvset. As is explained in greater detail below, older kvsets occur lower in the tree 100. Thus, raising a value up a tree level (e.g., from L2 to L1) results in a new kvset in the oldest position of the receiving node.

The node 110 also includes a deterministic mapping of a key-value pair in a kvset of the node to any one of the child nodes of the node 110. As used herein, the deterministic mapping means that, given a key-value pair, an external entity can trace the path through the tree 100 of possible child nodes without knowing the contents of the tree 100. This is, for example, quite different from a B-tree, in which the contents of the tree determine where a given key's value will fall in order to maintain the search-optimized structure of the tree. In contrast, here the deterministic mapping provides a rule such that, for example, given a key-value pair, one may calculate the child node at L3 to which the pair would be mapped, even if the maximum tree level (e.g., tree depth) is currently only at L1. In an example, the deterministic mapping includes a portion of a hash of a portion of the key. Thus, a sub-key may be hashed to arrive at a mapping set, and a portion of this set may be used for any given level of the tree. In an example, the portion of the key is the entire key. There is no reason not to use the entire key.
In an example, the hash includes multiple non-overlapping portions, the multiple non-overlapping portions including the portion of the hash. In an example, each of the multiple non-overlapping portions corresponds to a level of the tree. In an example, the portion of the hash is determined from the multiple non-overlapping portions by the level of the node. In an example, a maximum number of child nodes for the node is defined by the size of the portion of the hash. In an example, the size of the portion of the hash is a number of bits.

These examples can be illustrated by considering a hash of a key that results in eight bits. The eight bits may be divided into three sets: the first two bits; the third through sixth bits (producing four bits); and the seventh and eighth bits. Child nodes may be indexed based on a set of bits, such that child nodes at the first level (e.g., L1) have two-bit names, child nodes at the second level (e.g., L2) have four-bit names, and child nodes at the third level (e.g., L3) have two-bit names. This discussion is expanded below with respect to FIGS. 13 and 14.
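A sketch of this eight-bit example in C follows; the hash function is a hypothetical placeholder, and the bit layout simply mirrors the 2/4/2 split described above.

```c
#include <stddef.h>
#include <stdint.h>

/* Deterministic key-to-child mapping for the eight-bit example
 * (bits split 2/4/2 across levels L1-L3). hash_key() stands in for
 * whatever hash the tree actually uses. */
extern uint8_t hash_key(const void *key, size_t key_len);

static unsigned child_index(uint8_t h, unsigned level)
{
    switch (level) {
    case 1:  return (h >> 6) & 0x3; /* first two bits: up to 4 children   */
    case 2:  return (h >> 2) & 0xF; /* third-sixth bits: up to 16 children */
    case 3:  return h & 0x3;        /* seventh and eighth bits: up to 4   */
    default: return 0;              /* deeper levels undefined in example */
    }
}
```

Note that the index at any level is computed from the key alone, which is what permits an external entity to trace a pair's path without consulting the tree's contents.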
In an example, the primary key block includes a list of media block identifications for the one or more extended key blocks of the kvset.

In an example, the primary key block includes a header for a key tree of the kvset. The header can include a number of values to facilitate interacting with the keys, or the kvset generally. In an example, the primary key block, or the header, includes a copy of the lowest key in the key tree of the kvset. Here, the lowest key is determined by a pre-set sort order of the tree (e.g., the total ordering of keys in the tree 100). In an example, the primary key block includes a copy of the highest key in the key tree of the kvset, the highest key being determined by the pre-set sort order of the tree. In an example, the primary key block includes a list of media block identifications for the key tree of the kvset. In an example, the primary key block includes a Bloom filter header for a Bloom filter of the kvset. In an example, the primary key block includes a list of media block identifications for the Bloom filter of the kvset.

In an example, the values of the kvset are stored in a set of value blocks. Here, members of the set of value blocks correspond to media blocks of the storage medium. In an example, each value block includes a header to identify it as a value block. In an example, a value block includes a storage section for one or more values with no separation between the values. Thus, the bits of a first value run into the bits of a second value on the storage medium without a guard, container, or other delimiter between them. In an example, the primary key block includes a list of media block identifications for the value blocks in the set of value blocks. Thus, the primary key block manages the storage references to the value blocks.

In an example, the primary key block includes a set of metrics for the kvset. In an example, the set of metrics includes a total number of keys stored in the kvset. In an example, the set of metrics includes a number of keys with tombstone values stored in the kvset. As used herein, a tombstone is a data marker indicating that the value corresponding to the key has been deleted. In general, a tombstone resides in the key entry, and no value-block space is consumed for the key-value pair. The purpose of the tombstone is to mark the deletion of a value while avoiding the potentially expensive operation of purging the value from the tree 100. Thus, when a temporally ordered search encounters a tombstone, it knows that the corresponding value has been deleted, even if an expired version of the key-value pair resides at an older location within the tree 100.

In an example, the set of metrics stored in the primary key block includes a sum of all key lengths for keys stored in the kvset. In an example, the set of metrics includes a sum of all value lengths for keys stored in the kvset. These last two metrics give an approximate (or exact) amount of storage consumed by the kvset. In an example, the set of metrics includes an amount of unreferenced data (e.g., unreferenced values) in a value block of the kvset. This last metric gives an estimate of the space that may be reclaimed in a maintenance operation.
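The following Python sketch collects the primary key block contents described above into a single record. The field names and the frozen dataclass representation are assumptions chosen for illustration; the text above only prescribes the concepts they label, not a layout.

from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)  # frozen mirrors the immutability of a written kvset
class PrimaryKeyBlockHeader:
    min_key: bytes                  # copy of the lowest key in the kvset's key tree
    max_key: bytes                  # copy of the highest key
    key_tree_blocks: List[int]      # media block IDs holding the key tree
    extended_key_blocks: List[int]  # media block IDs of extended key blocks
    value_blocks: List[int]         # media block IDs of the kvset's value blocks
    bloom_blocks: List[int]         # media block IDs of the Bloom filter
    key_count: int                  # total keys stored in the kvset
    tombstone_count: int            # keys whose value is a tombstone
    key_bytes: int                  # sum of all key lengths
    value_bytes: int                # sum of all value lengths
    unreferenced_bytes: int         # value-block data no longer referenced

Because the kvset never changes once written, metrics such as key_count or unreferenced_bytes can be computed once at creation time and trusted thereafter.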
Additional details of key blocks and value blocks are discussed below with respect to FIGS. 4 and 5.

In an example, the tree 100 includes a first root 105 in a first computer-readable medium of at least one machine-readable medium, and a second root 110 in a second computer-readable medium of the at least one computer-readable medium. In an example, the second root is the only child of the first root. In an example, the first computer-readable medium is byte addressable and the second computer-readable medium is block addressable. This is illustrated in FIG. 1, where node 105 is in the MEM tree level to denote its in-memory location, while node 110 is at L0 to denote that it is the on-disk root of the tree 100.

The discussion above demonstrates a variety of organizational attributes of the KVS tree 100. Operations to interact with the tree 100, such as tree maintenance (e.g., optimization, garbage collection, etc.), searching, etc., are discussed below with respect to FIGS. 7-25. Before proceeding to these topics, FIGS. 2 and 3 illustrate techniques to implement efficient use of multi-stream storage devices using the structure of the KVS tree 100.

Storage devices, such as SSDs, that include flash memory can operate more efficiently and have greater endurance (e.g., will not "wear out" as quickly) when data with similar lifetimes are grouped in flash erase blocks. Storage devices including other non-volatile media may also benefit from grouping data with similar lifetimes, such as shingled magnetic recording (SMR) hard disk drives (HDDs). In this context, data have similar lifetimes if they are deleted at the same time, or within a relatively small time interval. The method for deleting data on a storage device can include explicitly deallocating, logically overwriting, or physically overwriting the data on the storage device.

Because a storage device is generally unaware of the lifetimes of the various data to be stored on it, the storage device can provide an interface for data access commands (e.g., read or write) that identify a logical lifetime group with which the data is associated. For example, the industry-standard SCSI and proposed NVMe storage device interfaces specify write commands that include both the data to be written to the storage device and a numeric stream identifier (stream ID) for a lifetime group, called a stream, corresponding to the data. A storage device that supports multiple streams is a multi-stream storage device.

Temperature is a stability value used to classify data, whereby the value corresponds to the relative probability that the data will be deleted in any given time interval. For example, HOT data may be expected to be deleted (or changed) within a minute, while COLD data may be expected to last an hour. In an example, a finite set of stability values can be used to specify this classification. In an example, the set of stability values can be {HOT, WARM, COLD}, where, in a given time interval, data classified as HOT has a higher probability of deletion than data classified as WARM, which in turn has a higher probability of deletion than data classified as COLD.

FIGS. 2 and 3 address assigning different stream IDs to different writes based on a given stability value and one or more attributes of the data with respect to one or more KVS trees. Thus, continuing the example above, for a given storage device, a first set of stream identifiers can be used with write commands for data classified as HOT, a second set of stream identifiers can be used with write commands for data classified as WARM, and a third set of stream identifiers can be used with write commands for data classified as COLD, where a given stream identifier is in at most one of the three sets.
The following terms are provided to facilitate the discussion of the multi-stream storage device systems and techniques of FIGS. 2 and 3:

DID is a unique device identifier for a storage device.

SID is a stream identifier for a stream on a given storage device.

TEMPSET is a finite set of temperature values.

TEMP is an element of TEMPSET.

FID is a unique forest identifier for a collection of KVS trees.

TID is a unique tree identifier for a KVS tree. The KVS tree 100 has a TID.

LNUM is a level number in a given KVS tree, where, for convenience, the root node of the KVS tree is considered to be at tree level 0, the child nodes of the root node (if any) are considered to be at tree level 1, and so on. Thus, as illustrated, the KVS tree 100 includes tree levels L0 (including node 110) through L3.

NNUM is the number of a given node at a given level in a given KVS tree, where, for convenience, NNUM may be a number in the range zero to (NodeCount(LNUM)-1), where NodeCount(LNUM) is the total number of nodes at tree level LNUM, such that every node in the KVS tree 100 is uniquely identified by the tuple (LNUM, NNUM). As illustrated in FIG. 1, the complete listing of node tuples, starting at node 110 and proceeding top to bottom, left to right, would be:

L0 (root): (0,0)
L1: (1,0), (1,1), (1,2), (1,3), (1,4)
L2: (2,0), (2,1), (2,2), (2,3)
L3: (3,0), (3,1), (3,2), (3,3)

KVSETID is a unique kvset identifier.

WTYPE is the value KBLOCK or VBLOCK, as discussed below.

WLAST is a Boolean value (TRUE or FALSE), as discussed below.
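The terms above map naturally onto simple types. The Python sketch below is illustrative only; the enum members shown are one example TEMPSET, and nothing here is an interface prescribed by the embodiments.

from enum import Enum
from typing import NamedTuple

class Temp(Enum):        # an example TEMPSET
    HOT = "hot"
    WARM = "warm"
    COLD = "cold"

class StreamMapTuple(NamedTuple):
    did: str      # DID: storage device receiving the write
    fid: int      # FID: forest to which the KVS tree belongs
    tid: int      # TID: KVS tree being written
    lnum: int     # LNUM: tree level of the node containing the kvset
    nnum: int     # NNUM: node number within that level
    kvsetid: int  # KVSETID: kvset being written
    wtype: str    # "KBLOCK" or "VBLOCK"
    wlast: bool   # True on the last write for this kvset on this device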
FIG. 2 is a block diagram illustrating an example of writing to a multi-stream storage device (e.g., device 260 or 265), in accordance with an embodiment. FIG. 2 illustrates multiple KVS trees, KVS tree 205 and KVS tree 210. As illustrated, each tree performs a write operation, 215 and 220 respectively. These write operations are handled by the storage subsystem 225. The storage subsystem 225 can be, for example, a device driver for the device 260, or a storage product to manage multiple devices (e.g., device 260 and device 265), such as those found in operating systems, network attached storage devices, etc. The storage subsystem 225 will in time complete the writes to the storage devices in operations 250 and 255, respectively. The stream mapping circuit 230 provides a stream ID for a given write 215 to be used in the device write 250.

In the KVS tree 205, the immutability of kvsets results in a whole kvset being written or deleted at one time. Thus, the data comprising a kvset has a similar lifetime. The data comprising a new kvset can be written to a single storage device, or to several storage devices (e.g., device 260 and device 265) using techniques such as erasure coding or RAID. Moreover, because the size of a kvset can be greater than any given device write 250, writing the kvset can involve directing multiple write commands to a given storage device 260. To facilitate operation of the stream mapping circuit 230, one or more of the following can be provided for selecting a stream ID for each such write command 250:

A) The KVSETID of the kvset being written;
B) The DID of the storage device;
C) The FID of the forest to which the KVS tree belongs;
D) The TID of the KVS tree;
E) The LNUM of the node in the KVS tree containing the kvset;
F) The NNUM of the node in the KVS tree containing the kvset;
G) WTYPE, which is KBLOCK if the write command is for a key block of KVSETID on DID, or VBLOCK if the write command is for a value block of KVSETID on DID; and
H) WLAST, which is TRUE if the write command is the last for KVSETID on DID, and FALSE otherwise.

In an example, for each such write command, a tuple (DID, FID, TID, LNUM, NNUM, KVSETID, WTYPE, WLAST), called a stream mapping tuple, can be sent to the stream mapping circuit 230. The stream mapping circuit 230 can then respond with the stream ID for the storage subsystem 225 to use with the write command 250.

The stream mapping circuit 230 can include an electronic-hardware-implemented controller 235, an accessible stream ID (A-SID) table 240, and a selected stream ID (S-SID) table 245. The controller 235 is arranged to accept a stream mapping tuple as input and to respond with a stream ID. In an example, the controller 235 is configured with the plurality of storage devices 260 and 265 that store the plurality of KVS trees 205 and 210. The controller 235 is arranged to obtain (e.g., by configuration, query, etc.) the configuration of the accessible devices. The controller 235 is also arranged to be configured with a set of stability values TEMPSET and, for each value TEMP in TEMPSET, with a fraction, number, or other determinant of the number of streams on a given storage device to be used for data classified with that value.

In an example, the controller 235 is arranged to obtain (e.g., retrieve from a configuration device, firmware, etc., or receive via configuration, message, etc.) a temperature assignment method. Here, the temperature assignment method will be used to assign a stability value to the write request 215. In an example, any one or more of DID, FID, TID, LNUM, NNUM, KVSETID, WTYPE, or WLAST from the stream mapping tuple can be used as input to the temperature assignment method executed by the controller 235 to select a stability value TEMP from TEMPSET. In an example, the KVS tree scope is the set of parameters specific to the KVS tree component (e.g., the kvset) being written. In an example, the KVS tree scope includes one or more of FID, TID, LNUM, NNUM, or KVSETID. Thus, in this example, the stream mapping tuple can include the KVS tree scope components as well as device-specific or write-specific components, such as DID, WLAST, or WTYPE. In an example, a stability, or temperature, scope tuple TSCOPE is derived from the stream mapping tuple. The following are example KVS tree scope components that can be used to create TSCOPE:

A) TSCOPE computed as (FID, TID, LNUM);
B) TSCOPE computed as (LNUM);
C) TSCOPE computed as (TID);
D) TSCOPE computed as (TID, LNUM); or
E) TSCOPE computed as (TID, LNUM, NNUM).

In an example, the controller 235 can implement a static temperature assignment method. The static temperature assignment method may, for example, read the selected TEMP from a configuration file, a database, metadata of the KVS tree TID, or metadata in another database (including metadata stored in the KVS tree TID). In this example, these data sources include a mapping from TSCOPE to stability values.
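A minimal sketch of one TSCOPE choice and a static temperature assignment follows, reusing the Temp and StreamMapTuple types from the sketch above. The choice of TSCOPE = (TID, LNUM) and the lookup structure are assumptions for illustration; the embodiments leave both configurable.

from typing import Dict, Tuple

TScope = Tuple[int, int]  # TSCOPE choice (D) above: (TID, LNUM)

def tscope_of(t: StreamMapTuple) -> TScope:
    return (t.tid, t.lnum)

class StaticTemperatureAssignment:
    """Selects TEMP from a preloaded TSCOPE -> TEMP mapping."""
    def __init__(self, mapping: Dict[TScope, Temp], default: Temp = Temp.COLD):
        self.mapping = dict(mapping)  # e.g., read from configuration or metadata
        self.default = default

    def assign(self, t: StreamMapTuple) -> Temp:
        return self.mapping.get(tscope_of(t), self.default)

# Example: level 0 of tree 7 is HOT; anything else defaults to COLD.
assign_temp = StaticTemperatureAssignment({(7, 0): Temp.HOT})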
In an example, the mapping can be cached (e.g., upon activation of the controller 235, or dynamically during later operation) to speed the assignment of stability values as write requests arrive.

In an example, the controller 235 can implement a dynamic temperature assignment method. The dynamic temperature assignment method can compute the selected TEMP based on the frequency with which kvsets are written to the TSCOPE. For example, the controller 235 can measure the frequency with which the temperature assignment method is executed for a given TSCOPE and cluster these frequencies around the TEMP values in TEMPSET. Thus, the computation can, for example, define a set of frequency ranges and a mapping from each frequency range to a stability value, such that the value of TEMP is determined by the frequency range containing the frequency with which kvsets are written to the TSCOPE.

The controller 235 is arranged to obtain (e.g., retrieve from a configuration device, firmware, etc., or receive via configuration, message, etc.) a stream assignment method. The stream assignment method consumes the KVS tree 205 aspects of the write 215, along with the stability value (e.g., from the temperature assignment), to produce a stream ID. In an example, the controller 235 can use the stream mapping tuple (e.g., including the KVS tree scope) in the stream assignment method to select the stream ID. In an example, any one or more of DID, FID, TID, LNUM, NNUM, KVSETID, WTYPE, or WLAST, along with the stability value, can be used in the stream assignment method executed by the controller 235 to select the stream ID. In an example, a stream scope tuple SSCOPE is derived from the stream mapping tuple. The following are example KVS tree scope components that can be used to create SSCOPE:

A) SSCOPE computed as (FID, TID, LNUM, NNUM);
B) SSCOPE computed as (KVSETID);
C) SSCOPE computed as (TID);
D) SSCOPE computed as (TID, LNUM);
E) SSCOPE computed as (TID, LNUM, NNUM); or
F) SSCOPE computed as (LNUM).

The controller 235 can be arranged to initialize the A-SID table 240 and the S-SID table 245 before accepting input. The A-SID table 240 is a data structure (e.g., a table, dictionary, etc.) that can store entries for tuples (DID, TEMP, SID) and can retrieve such entries given values for DID and TEMP. The notation A-SID(DID, TEMP) refers to all entries in the A-SID table 240 (if any) with the given values for DID and TEMP. In an example, the A-SID table 240 can be initialized for each configured storage device 260 and 265 and each temperature value in TEMPSET. The A-SID table 240 initialization can proceed as follows. For each configured storage device DID, the controller 235 can be arranged to:

A) Obtain the number of streams available on DID, referred to as SCOUNT;
B) Obtain a unique SID for each of the SCOUNT streams on DID; and
C) For each value TEMP in TEMPSET:
a) Compute how many of the SCOUNT streams to use for data classified by TEMP, referred to as TCOUNT, based on the configured determinant for TEMP; and
b) Select TCOUNT SIDs for DID not yet entered in the A-SID table 240 and, for each selected SID, create an entry (e.g., row) for (DID, TEMP, SID) in the A-SID table 240.

Thus, once initialized, the A-SID table 240 contains an entry with a unique SID for each configured storage device DID and value TEMP in TEMPSET.
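A sketch of the A-SID initialization just described follows, continuing the Temp type from the sketches above. The helper assumes the available stream IDs per device have already been queried through whatever device interface applies; fractional determinants, names, and leftover-stream handling are assumptions of the sketch.

from typing import Dict, List, Set, Tuple

def init_a_sid(devices: Dict[str, List[int]],
               determinants: Dict[Temp, float]) -> Set[Tuple[str, Temp, int]]:
    """Build A-SID entries (DID, TEMP, SID). `devices` maps each DID to its
    available SIDs; `determinants` gives each TEMP's share of the streams."""
    a_sid: Set[Tuple[str, Temp, int]] = set()
    for did, sids in devices.items():
        remaining = list(sids)
        for temp, fraction in determinants.items():
            tcount = max(1, int(len(sids) * fraction))   # TCOUNT for this TEMP
            for sid in remaining[:tcount]:
                a_sid.add((did, temp, sid))
            remaining = remaining[tcount:]
        # any leftover SIDs simply stay unassigned in this sketch
    return a_sid

a_sid = init_a_sid({"dev0": [1, 2, 3, 4, 5, 6]},
                   {Temp.HOT: 0.5, Temp.COLD: 0.5})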
The technique used to obtain the number of streams available on a configured storage device 260, and the SID for each of those streams, varies with the storage device interface; however, these are readily accessible via the interfaces of multi-stream storage devices.

The S-SID table 245 maintains a record of the streams already in use (e.g., already part of a given write). The S-SID table 245 is a data structure (e.g., a table, dictionary, etc.) that can store entries for tuples (DID, TEMP, SSCOPE, SID, timestamp) and can retrieve or delete such entries given values for DID, TEMP, and optionally SSCOPE. The notation S-SID(DID, TEMP) refers to all entries in the S-SID table 245 (if any) with the given values for DID and TEMP. Like the A-SID table 240, the S-SID table 245 can be initialized by the controller 235. In an example, the controller 235 is arranged to initialize the S-SID table 245 for each configured storage device 260 and 265 and each temperature value in TEMPSET.

As noted above, entries in the S-SID table 245 represent streams currently, or already, assigned for write operations. Thus, in general, the S-SID table 245 is empty after initialization, with entries created by the controller 235 as stream IDs are assigned.

In an example, the controller 235 can implement a static stream assignment method. The static stream assignment method selects the same stream ID for a given DID, TEMP, and SSCOPE. In an example, the static stream assignment method determines whether S-SID(DID, TEMP) has an entry for SSCOPE. If there is no matching entry, the static stream assignment method selects a stream ID SID from A-SID(DID, TEMP) and creates an entry (DID, TEMP, SSCOPE, SID, timestamp) in the S-SID table 245, where the timestamp is the current time after the selection. In an example, the selection from A-SID(DID, TEMP) is random, or the result of a round-robin process. Once the entry in the S-SID table 245 is found or created, the stream ID SID is returned to the storage subsystem 225. In an example, if WLAST is TRUE, the entry for (DID, TEMP, SSCOPE) in the S-SID table 245 is deleted. This last example demonstrates the usefulness of having WLAST signal the completion of the write 215 for a kvset or the like, which is otherwise known to the tree 205 but unknown to the storage subsystem 225.
In an example, the controller 235 can implement a least recently used (LRU) stream assignment method. The LRU stream assignment method selects the same stream ID for a given DID, TEMP, and SSCOPE within a relatively small time interval. In an example, the LRU assignment method determines whether S-SID(DID, TEMP) has an entry for SSCOPE. If the entry exists, the LRU assignment method selects the stream ID in this entry and sets the timestamp in this entry in the S-SID table 245 to the current time.

If there is no SSCOPE entry in S-SID(DID, TEMP), the LRU stream assignment method determines whether the number of entries in S-SID(DID, TEMP) equals the number of entries in A-SID(DID, TEMP). If so, the LRU assignment method selects the stream ID SID from the entry with the oldest timestamp in S-SID(DID, TEMP). Here, that entry in the S-SID table 245 is replaced with a new entry (DID, TEMP, SSCOPE, SID, timestamp), where the timestamp is the current time after the selection.

If there are fewer entries in S-SID(DID, TEMP) than in A-SID(DID, TEMP), the method selects a stream ID SID from A-SID(DID, TEMP) such that no entry in S-SID(DID, TEMP) has the selected stream ID, and creates an entry (DID, TEMP, SSCOPE, SID, timestamp) in the S-SID table 245, where the timestamp is the current time after the selection.

Once the entry in the S-SID table 245 is found or created, the stream ID SID is returned to the storage subsystem 225. In an example, if WLAST is TRUE, the entry for (DID, TEMP, SSCOPE) in the S-SID table 245 is deleted.

In operation, the controller 235 is arranged to assign a stability value to a given stream mapping tuple received as part of the write request 215. Once the stability value is determined, the controller 235 is arranged to assign the SID. The temperature assignment method and the stream assignment method may each reference and update the A-SID table 240 and the S-SID table 245. In an example, the controller 235 is also arranged to provide the SID to a requester, such as the storage subsystem 225.
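A condensed Python sketch of the LRU stream assignment just described follows, continuing the Temp type and the a_sid set built in the earlier sketches. The dictionary layout of the S-SID table and all names are assumptions; this is a sketch of the selection logic, not an implementation of the controller 235.

import random
import time
from typing import Dict, Tuple

SScope = Tuple[int, int]  # e.g., SSCOPE computed as (TID, LNUM)

class LruStreamAssign:
    def __init__(self, a_sid):
        self.a_sid = a_sid  # set of (DID, TEMP, SID) entries
        # S-SID table as {(DID, TEMP, SSCOPE): (SID, timestamp)}
        self.s_sid: Dict[Tuple[str, Temp, SScope], Tuple[int, float]] = {}

    def assign(self, did: str, temp: Temp, sscope: SScope, wlast: bool) -> int:
        key = (did, temp, sscope)
        if key in self.s_sid:                 # reuse and refresh the timestamp
            sid = self.s_sid[key][0]
        else:
            avail = {s for d, t, s in self.a_sid if (d, t) == (did, temp)}
            used = {k: v for k, v in self.s_sid.items()
                    if k[0] == did and k[1] == temp}
            free = avail - {sid for sid, _ in used.values()}
            if free:                          # an accessible stream is unused
                sid = random.choice(sorted(free))
            else:                             # evict the least recently used entry
                oldest = min(used, key=lambda k: used[k][1])
                sid = used[oldest][0]
                del self.s_sid[oldest]
        self.s_sid[key] = (sid, time.time())
        if wlast:                             # last write for this kvset: drop entry
            del self.s_sid[key]
        return sid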
Using stream IDs based on the KVS tree scope permits similar data to be collocated in erase blocks 270 on the multi-stream storage device 260. This reduces garbage collection within the device and thereby increases device performance and lifetime. This benefit can extend across multiple KVS trees. KVS trees can be used in a forest, or grove, whereby several KVS trees are used to implement a single structure, such as a file system. For example, one KVS tree may use block numbers as keys and bits in the blocks as values, while a second KVS tree may use file paths as keys and lists of block numbers as values. In this example, a kvset for a given file referenced by path is likely to have a lifetime similar to that of the kvsets holding the block numbers. Hence the inclusion of FID above.

The structures and techniques described above provide several advantages in systems implementing KVS trees and storage devices such as flash storage devices. In an example, a computing system implementing several KVS trees stored on one or more storage devices can use its knowledge of the KVS trees to more efficiently select streams on multi-stream storage devices. For example, the system can be configured such that the number of simultaneous write operations (e.g., ingests or compressions) executing for the KVS trees is bounded by the number of streams reserved on any given storage device for the temperature classifications assigned to the kvset data written by these operations. This is possible because, within a kvset, the data is expected to have the same lifetime, the kvset being written, and later deleted, as a whole. Keys can be separated from values, as described elsewhere. Thus, when performing the key compression discussed below, the key writes will have one lifetime, which may be shorter than the lifetime of the values. In addition, the tree level appears to be a strong indicator of data lifetime, with older data at greater (e.g., deeper) tree levels having a longer lifetime than younger data at higher tree levels.

The following scenario may further clarify the operation of the stream mapping circuit 230 in bounding writes, given:

A) Temperature values {HOT, COLD}, where H streams on a given storage device are used for data classified as HOT and C streams on a given storage device are used for data classified as COLD;
B) A temperature assignment method configured with TSCOPE computed as (LNUM), whereby data written to level 0 of any KVS tree is assigned the temperature value HOT, and data written to level 1 or greater of any KVS tree is assigned the temperature value COLD; and
C) An LRU stream assignment method configured with SSCOPE computed as (TID, LNUM).

In this case, the total number of simultaneous ingest and compression operations (the operations generating the writes) for all of the KVS trees follows these conditions: the simultaneous ingest operations for all KVS trees is at most H, because the data for all ingest operations is written to level 0 in the KVS trees and hence will be classified as HOT; and the simultaneous compression operations for all KVS trees is at most C, because the data for all overflow compressions and most other compressions is written to level 1 or greater and hence will be classified as COLD.

Other such bounds are possible, and may be advantageous, depending on the particular implementation details of the KVS trees and the controller 235. For example, given the controller 235 configured as above, it may be advantageous to bound the number of ingest operations to a fraction of H (e.g., one-half) and the number of compression operations to a fraction of C (e.g., three-quarters), because the LRU stream assignment method with SSCOPE computed as (TID, LNUM) may not take advantage of WLAST in the stream mapping tuple to remove unneeded S-SID table 245 entries immediately upon receiving the last write for a given KVSETID in a TID, resulting in suboptimal SID selection.

Although the operation of the stream mapping circuit 230 is described above in the context of KVS trees, other structures, such as LSM tree implementations, may equally benefit from the concepts presented herein. Many LSM tree variants store collections of key-value pairs and tombstones, whereby a given collection may be created by an ingest operation or a garbage collection operation (commonly called a compression or merge operation), and later deleted in whole as the result of a subsequent ingest or garbage collection operation. Thus, like the data comprising a kvset in a KVS tree, the data comprising such a collection has a similar lifetime. Hence, tuples similar to the stream mapping tuples above can be defined for most other LSM tree variants, with the KVSETID replaced by a unique identifier for the collection of key-value pairs or tombstones created by an ingest or garbage collection operation in the given LSM tree variant. The stream mapping circuit 230 can then be used to select stream identifiers for the multiple write commands storing the data comprising the collection of key-value pairs and tombstones, as described.

FIG. 3 illustrates an example of a method 300 to facilitate writing to a multi-stream storage device, in accordance with an embodiment.
The operations of the method 300 are implemented with electronic hardware (e.g., circuitry), such as that described throughout this application (including with respect to FIG. 26 below). The method 300 provides several examples implementing the discussion above with respect to FIG. 2.

At operation 305, notification of a KVS tree write request for a multi-stream storage device is received. In an example, the notification includes a KVS tree scope corresponding to the data in the write request. In an example, the KVS tree scope includes at least one of: a kvset ID corresponding to the kvset of the data; a node ID corresponding to the node of the KVS tree corresponding to the data; a level ID corresponding to the tree level of the data; a tree ID of the KVS tree; a forest ID corresponding to the forest to which the KVS tree belongs; or a type corresponding to the data. In an example, the type is either a key block type or a value block type.

In an example, the notification includes a device ID of the multi-stream device. In an example, the notification includes a WLAST flag corresponding to the last write request in a sequence of write requests that write the kvset, identified by the kvset ID, to the multi-stream storage device.

At operation 310, a stream identifier (ID) is assigned to the write request based on the KVS tree scope and a stability value of the write request. In an example, assigning the stability value includes maintaining a set of frequencies of stability value assignments for level IDs corresponding to tree levels, each member of the frequency set corresponding to a unique level ID; retrieving from the frequency set a frequency corresponding to the level ID in the KVS tree scope; and selecting the stability value from a mapping of stability values to frequency ranges based on the frequency.

In an example, assigning the stream ID to the write request based on the KVS tree scope and the stability value of the write request includes creating a stream scope value from the KVS tree scope. In an example, the stream scope value includes the level ID of the data. In an example, the stream scope value includes the tree ID of the data. In an example, the stream scope value includes the node ID of the data. In an example, the stream scope value includes the kvset ID of the data.

In an example, assigning the stream ID to the write request based on the KVS tree scope and the stability value of the write request also includes performing a lookup in a selected-stream data structure using the stream scope value. In an example, performing the lookup in the selected-stream data structure includes failing to find the stream scope value in the selected-stream data structure; performing a lookup on an available-stream data structure using the stability value; receiving a result of the lookup that includes a stream ID; and adding an entry to the selected-stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of the time the entry is added. In an example, multiple entries of the available-stream data structure correspond to the stability value, and the result of the lookup is a round-robin or random selection of an entry from among those entries.
In an example, the available-stream data structure can be initialized by obtaining a number of streams available from the multi-stream storage device; obtaining a stream ID for each stream available from the multi-stream storage device, each stream ID being unique; adding the stream IDs to stability value groups; and creating a record for each stream ID in the available-stream data structure, each record including the stream ID, a device ID of the multi-stream storage device, and a stability value corresponding to the stability value group of the stream ID.

In an example, performing the lookup in the selected-stream data structure includes failing to find the stream scope value in the selected-stream data structure; locating the stream ID from either the selected-stream data structure or the available-stream data structure based on the contents of the selected-stream data structure; and creating an entry in the selected-stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of the time the entry is added. In an example, locating the stream ID from the selected-stream data structure or the available-stream data structure based on the contents of the selected-stream data structure includes comparing a number of first entries from the selected-stream data structure with a number of second entries from the available-stream data structure to determine that the number of first entries equals the number of second entries; locating, in the selected-stream data structure, a group of entries that correspond to the stability value; and returning the stream ID of the entry in the group of entries with the oldest timestamp. In an example, locating the stream ID from the selected-stream data structure or the available-stream data structure based on the contents of the selected-stream data structure includes comparing the number of first entries from the selected-stream data structure with the number of second entries from the available-stream data structure to determine that the number of first entries does not equal the number of second entries; performing a lookup on the available-stream data structure using the stability value and the stream IDs in the entries of the selected-stream data structure; receiving a result of the lookup that includes a stream ID that is not in the entries of the selected-stream data structure; and adding an entry to the selected-stream data structure, the entry including the stream ID, the stream scope value, and a timestamp of the time the entry is added.

In an example, assigning the stream ID to the write request based on the KVS tree scope and the stability value of the write request also includes returning, from the selected-stream data structure, the stream ID corresponding to the stream scope. In an example, returning the stream ID corresponding to the stream scope from the selected-stream data structure includes updating the timestamp of the entry in the selected-stream data structure corresponding to the stream ID.
In an example, the write request includes a WLAST flag, and returning the stream ID corresponding to the stream scope from the selected-stream data structure includes removing, from the selected-stream data structure, the entry corresponding to the stream ID.

In an example, the method 300 can be extended to include removing entries from the selected-stream data structure whose timestamps exceed a threshold.

At operation 315, the stream ID is returned to govern the stream assignment for the write request, the stream assignment modifying the write operation of the multi-stream storage device.

In an example, the method 300 can optionally be extended to include assigning the stability value based on the KVS tree scope. In an example, the stability value is one of a predefined set of stability values. In an example, the predefined set of stability values includes HOT, WARM, and COLD, where HOT indicates the lowest expected lifetime of the data on the multi-stream storage device and COLD indicates the highest expected lifetime of the data on the multi-stream storage device.

In an example, assigning the stability value includes locating the stability value in a data structure using a portion of the KVS tree scope. In an example, the portion of the KVS tree scope includes the level ID of the data. In an example, the portion of the KVS tree scope includes the type of the data. In an example, the portion of the KVS tree scope includes the tree ID of the data. In an example, the portion of the KVS tree scope includes the node ID of the data.

FIG. 4 is a block diagram illustrating an example of a storage organization for keys and values, in accordance with an embodiment. A kvset can be stored using key blocks to hold keys (and tombstones, as appropriate) and value blocks to hold values. For a given kvset, the key blocks can also contain indexes and other information (e.g., a Bloom filter) for efficiently locating a single key, locating a range of keys, or generating a total ordering of all keys in the kvset (including key tombstones), and for obtaining the values associated with those keys, if any.

A single kvset is illustrated in FIG. 4. The key blocks include a primary key block 410 (including a header 405) and an extended key block 415 (including an extended header 417). The value blocks include headers 420 and 440 as well as values 425, 430, 435, and 445, respectively. The second value block also includes free space 450.

A tree representation of the kvset is illustrated spanning the key blocks 410 and 415. In this illustration, the leaf nodes contain value references (VIDs) to the values 425, 430, 435, and 445, as well as two keys with tombstones. This illustrates that, in an example, a tombstone has no corresponding value in a value block, even though it may be referred to as a type of key-value pair.

The illustration of the value blocks demonstrates that each value block can have a header and values adjacent to one another without delineation. A reference to a particular bit in a value block, such as for the value 425, is typically stored in the corresponding key entry, for example in an offset-and-extent format.

FIG. 5 is a block diagram illustrating an example of a configuration for key blocks and value blocks, according to an embodiment. The key block and value block organization of FIG. 5 illustrates the general simplicity of the extended key blocks and the value blocks.
In particular, each is typically a simple storage container with a header identifying its type (e.g., key block or value block), its size, its location on storage, or other metadata. In an example, a value block includes a header 540 with a magic number indicating that it is a value block, and a storage section 545 to store value bits. A key extension block includes a header 525 indicating that it is an extension block, and stores a portion 530 of the key structure, such as a KB tree, a B-tree, or the like.

Beyond simply storing the key structure, the primary key block provides a location for much of the kvset's metadata. The primary key block includes the root 520 of the key structure. The primary key block can also include a header 505, a Bloom filter 510, or a portion 515 of the key structure.

References to the components of the primary key block, such as the blocks of the Bloom filter 510 or the root node 520, are included in the header 505. Items such as the kvset size, value block addresses, compression performance, or usage metrics can also be included in the header 505.

The Bloom filter 510 is computed when the kvset is created and provides a ready mechanism to determine that a key is not in the kvset without performing a search of the key structure. This allows for greater efficiency in scan operations, as described below.

FIG. 6 illustrates an example of a KB tree 600 in accordance with an embodiment. An example key structure for use in the key blocks of a kvset is the KB tree. The KB tree 600 bears structural similarities to a B+ tree. In an example, the KB tree 600 has 4096-byte nodes (e.g., nodes 605, 610, and 615). All keys of the KB tree reside in leaf nodes (e.g., node 615). Internal nodes (e.g., node 610) have copies of selected leaf-node keys to navigate the tree 600. The result of a key lookup is a value reference, which can be (in an example) a value block ID, an offset, and a length.

The KB tree 600 has the following properties:

A) All keys in a subtree rooted at the child node of an edge key K are less than or equal to K.
B) The maximum key in any tree or subtree is the rightmost entry in the rightmost leaf node.
C) Given a node N with a rightmost edge pointing to a child node R, all keys in the subtree rooted at node R are greater than all keys in node N.

The KB tree 600 can be searched via a binary search among the keys in the root node 605 to find the appropriate "edge" key, following the link of that edge key to a child node. This process is then repeated at each node until a match is found in the leaf node 615, or no match is found.

Because a kvset is created once and does not change, creating the KB tree 600 can differ from the creation of other tree structures that change over time. The KB tree 600 can be created bottom-up. In an example, the leaf nodes 615 are created first, followed by their parents 610, and so on, until only one node, the root node 605, remains. In an example, the creation begins with a single empty leaf node, the current node. Each new key is added to the current node. When the current node becomes full, a new leaf node is created and becomes the current node. When the last key has been added, all of the leaf nodes are complete. At this point, the nodes at the next level up (i.e., the parents of the leaf nodes) are created in a similar fashion, using as the input stream the maximum key from each leaf node, in order. When the keys are exhausted, that level is complete. This process is repeated until the most recently created level consists of a single node, the root node 605.
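The bottom-up construction just described can be sketched in a few lines of Python. The fixed per-node fan-out and the levels-of-lists representation are assumptions for illustration; real nodes would be sized to media blocks (e.g., 4096 bytes) rather than counted in entries.

from typing import List

FANOUT = 4  # hypothetical: maximum entries per node

def build_kb_tree(sorted_keys: List[bytes]) -> List[List[List[bytes]]]:
    """Return the tree as levels of nodes, leaves first, root level last."""
    level = [sorted_keys[i:i + FANOUT]
             for i in range(0, len(sorted_keys), FANOUT)]
    levels = [level]
    while len(level) > 1:
        # parent entries are the largest (rightmost) key of each child node
        edges = [node[-1] for node in level]
        level = [edges[i:i + FANOUT] for i in range(0, len(edges), FANOUT)]
        levels.append(level)
    return levels

tree = build_kb_tree([bytes([k]) for k in range(10)])
# tree[-1] holds the single root node once the keys are exhausted

Because the keys arrive in sorted order and the kvset never changes afterward, no rebalancing machinery is needed; each level is written exactly once.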
If the current key block becomes full during creation, a new node can be written to an extended key block. In an example, an edge spanning from a first key block to a second key block includes a reference to the second key block.

FIG. 7 is a block diagram illustrating an ingest into a KVS tree, in accordance with an embodiment. In a KVS tree, the process of writing a new kvset to the root node 730 is referred to as an ingest. Key-value pairs 705 (including tombstones) are accumulated in a memory 710 of the KVS tree, organized into a number of kvsets ordered from the newest kvset 715 to the oldest kvset 720. In an example, the kvset 715 can be mutable to accept key-value pairs synchronously; this is the only mutable kvset variant in the KVS tree.

The ingest 725 writes the key-value pairs and tombstones in the oldest kvset 720 in the main memory 710 to a new (and newest) kvset 735 in the root node 730 of the KVS tree, and then deletes the kvset 720 from the main memory 710.
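A minimal sketch of this ingest path follows: pairs accumulate in a mutable in-memory kvset and, once a trigger is met, are frozen and prepended as the newest immutable kvset of the root node. The dict-based kvset, the use of None as a tombstone, the threshold, and all names are assumptions of the sketch.

from typing import Dict, List, Optional

INGEST_THRESHOLD = 4  # hypothetical size trigger

class Node:
    def __init__(self):
        self.kvsets: List[Dict[bytes, Optional[bytes]]] = []  # newest first

class KvsTree:
    def __init__(self):
        self.root = Node()
        self.memory: Dict[bytes, Optional[bytes]] = {}  # the mutable kvset

    def put(self, key: bytes, value: Optional[bytes]) -> None:
        self.memory[key] = value        # value None models a tombstone
        if len(self.memory) >= INGEST_THRESHOLD:
            self.ingest()

    def ingest(self) -> None:
        if self.memory:
            # freeze the in-memory kvset and prepend it as the newest kvset
            self.root.kvsets.insert(0, dict(self.memory))
            self.memory.clear()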
FIG. 8 illustrates an example of a method 800 for KVS tree ingest, in accordance with an embodiment. The operations of the method 800 are implemented with electronic hardware (e.g., circuitry), such as that described throughout this application (including with respect to FIG. 26 below).

At operation 805, a key-value set (kvset) is received to store in a key-value data structure. Here, the key-value data structure is organized as a tree, and the kvset includes a mapping of unique keys to values. The keys and the values of the kvset are immutable, and the nodes of the tree have a temporally ordered sequence of kvsets.

In an example, the kvset is immutable once it is written to at least one storage medium. In an example, the key entries of the kvset are stored in a set of key blocks that includes a primary key block and zero or more extended key blocks. Here, members of the set of key blocks correspond to media blocks of the at least one storage medium, and each key block includes a header to identify it as a key block.

In an example, the primary key block includes a list of media block identifications for the one or more extended key blocks of the kvset. In an example, the primary key block includes a list of media block identifications for the value blocks in the set of value blocks. In an example, the primary key block includes a copy of the lowest key in the key tree of the kvset, the lowest key being determined by a pre-set sort order of the tree. In an example, the primary key block includes a copy of the highest key in the key tree of the kvset, the highest key being determined by the pre-set sort order of the tree. In an example, the primary key block includes a header for the key tree of the kvset. In an example, the primary key block includes a list of media block identifications for the key tree of the kvset. In an example, the primary key block includes a Bloom filter header for the Bloom filter of the kvset. In an example, the primary key block includes a list of media block identifications for the Bloom filter of the kvset.

In an example of operation 805, the values are stored in a set of value blocks. Here, members of the set of value blocks correspond to media blocks of the at least one storage medium, and each value block includes a header to identify it as a value block. In an example, a value block includes a storage section for one or more values with no separation between the values.

In an example, the primary key block includes a set of metrics for the kvset. In an example, the set of metrics includes a total number of keys stored in the kvset. In an example, the set of metrics includes a number of keys with tombstone values stored in the kvset. In an example, the set of metrics includes a sum of all key lengths for keys stored in the kvset. In an example, the set of metrics includes a sum of all value lengths for keys stored in the kvset. In an example, the set of metrics includes an amount of unreferenced data in the value blocks of the kvset.

At operation 810, the kvset is written to the sequence of kvsets of the root node of the tree.

The method 800 can be extended to include operations 815 through 825.

At operation 815, a key and a corresponding value to be stored in the key-value data structure are received.

At operation 820, the key and the value are placed in a preliminary kvset, the preliminary kvset being mutable. In an example, when a rate of writes to the preliminary root node exceeds a threshold, the method 800 can be extended to throttle write requests to the key-value data structure.

At operation 825, the kvset is written to the key-value data structure when a metric is reached. In an example, the metric is a size of the preliminary root node. In an example, the metric is an elapsed time.

Once the ingest has occurred, a variety of maintenance operations can be employed to maintain the KVS tree. For example, if a key is written at one time with a first value and at a later time with a second value, removing the first key-value pair will free space or reduce search times. To address some of these issues, KVS trees can use compression. The details of several compression operations are discussed below with respect to FIGS. 9-18. The compression operations illustrated are forms of garbage collection because they can remove obsolete data, such as keys or key-value pairs, during the merge.

Compression occurs under a variety of triggering conditions, such as when the kvsets in a node meet specified or computed criteria. Examples of such compression criteria include the total size of the kvsets or the amount of garbage in the kvsets. One example of garbage in a kvset is a key-value pair or tombstone rendered obsolete, for example, by a key-value pair or tombstone in a newer kvset, or by having violated a time-to-live constraint, among other reasons. Another example of garbage in a kvset is unreferenced data (e.g., unreferenced values) in value blocks resulting from key compression.

In general, the inputs to a compression operation are some or all of the kvsets in a node at the time the compression criteria are met. These kvsets are called a merge set and comprise a temporally consecutive sequence of two or more kvsets.

Since compression is typically triggered when new data is introduced, the method 800 can be extended to support compression; however, the following operations can also be triggered to perform maintenance when, for example, there are free processing resources or other convenient scenarios.

Thus, the KVS tree can be compressed. In an example, the compression is performed in response to a trigger. In an example, the trigger is the expiration of a time period.

In an example, the trigger is a metric of the node. In an example, the metric is the total size of the kvsets of the node. In an example, the metric is the number of kvsets of the node. In an example, the metric is the total size of unreferenced values of the node. In an example, the metric is the number of unreferenced values.
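The triggers just listed can be sketched as a simple predicate over a node, continuing the Node sketch above. The specific thresholds are invented for illustration; the embodiments leave the criteria specified or computed by configuration.

KVSET_COUNT_TRIGGER = 3          # hypothetical: too many kvsets in the node
GARBAGE_BYTES_TRIGGER = 1 << 20  # hypothetical: too much unreferenced data

def needs_compression(node: Node, unreferenced_bytes: int) -> bool:
    """Return True when a compression of the node's kvsets should be scheduled."""
    return (len(node.kvsets) >= KVSET_COUNT_TRIGGER
            or unreferenced_bytes >= GARBAGE_BYTES_TRIGGER)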
FIG. 9 is a block diagram illustrating key compression, in accordance with an embodiment. Key compression reads the keys and tombstones, but not the values, of the merge set, removes all obsolete keys or tombstones, writes the resulting keys and tombstones into one or more new kvsets in the same node (e.g., by writing them into new key blocks), and deletes the key storage, but not the values, of the merge set from the node. The new kvsets atomically replace the merge set and are logically equivalent to it, both in content and in placement within the node's newest-to-oldest logical sequence of kvsets.

As illustrated, the kvsets KVS3 (the newest), KVS2, and KVS1 (the oldest) undergo key compression in the node. As the key storage of these kvsets is merged, collisions occur on keys A and B. Because the new kvset, KVS4 (illustrated below), can contain only one of each merged key, the collisions are resolved in favor of the most recent (as illustrated, the leftmost) entries, so that keys A and B reference value ID 10 and value ID 11, respectively. Key C has no collision and so will be included in the new kvset. Accordingly, the key entries that will be part of the new kvset KVS4 are shaded in the top node.

For illustrative purposes, KVS4 is drawn spanning KVS1, KVS2, and KVS3 in the node, and the value entries are drawn in similar positions in the node. The purpose of these positions is to demonstrate that the values are not changed in key compression, only the keys. As explained below, this provides more efficient searching by reducing the number of kvsets to search in a given node, and can also provide valuable insight to guide maintenance operations. Note also that the values 20 and 30 are illustrated with dashed lines to indicate that they remain in the node but are no longer referenced by key entries, their corresponding key entries having been removed in the compression.

Key compression is non-blocking, because a new kvset (e.g., KVS5) can be placed in the newest position (e.g., to the left of KVS3 or KVS4) during the compression; by definition, such an added kvset is logically newer than the kvset resulting from the key compression (e.g., KVS4).
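The merge at the heart of key (and key-value) compression can be sketched as follows: newest-first kvsets are combined so that the newest entry wins each key collision. Dicts stand in for kvsets and None for a tombstone; the tombstone-dropping flag sketches the leaf-node case discussed below. All names are assumptions of the sketch.

from typing import Dict, List, Optional

def merge_kvsets(merge_set: List[Dict[bytes, Optional[bytes]]],
                 drop_tombstones: bool = False
                 ) -> Dict[bytes, Optional[bytes]]:
    """merge_set is ordered newest to oldest; returns the new kvset."""
    merged: Dict[bytes, Optional[bytes]] = {}
    for kvset in merge_set:              # newest first, so the first writer wins
        for key, value in kvset.items():
            merged.setdefault(key, value)
    if drop_tombstones:                  # only safe when no older data can exist
        merged = {k: v for k, v in merged.items() if v is not None}
    return merged

new_kvset = merge_kvsets([{b"A": b"10", b"C": b"55"},   # KVS3 (newest)
                          {b"A": b"20", b"B": b"11"},   # KVS2
                          {b"B": b"30"}])               # KVS1 (oldest)
# -> {b"A": b"10", b"C": b"55", b"B": b"11"}, matching the collision
#    resolution for keys A and B illustrated in FIG. 9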
FIG. 10 illustrates an example of a method 1000 for key compression, in accordance with an embodiment. The operations of the method 1000 are implemented with electronic hardware (e.g., circuitry), such as that described throughout this application (including with respect to FIG. 26 below).

At operation 1005, a subset of kvsets from the sequence of kvsets of the node is selected. In an example, the kvset subset comprises contiguous kvsets and includes the oldest kvset.

At operation 1010, a set of collision keys is located. Members of the set of collision keys include key entries that appear in at least two kvsets in the sequence of kvsets of the node.

At operation 1015, the most recent key entry for each member of the set of collision keys is added to a new kvset. In an example, where the node has no child nodes and the kvset subset includes the oldest kvset, writing the most recent key entry for each member of the set of collision keys to the new kvset, and writing the entries of each kvset in the kvset subset that are not in the set of collision keys to the new kvset, includes omitting any key entries that include a tombstone. In an example, where the node has no child nodes and the kvset subset includes the oldest kvset, writing these entries to the new kvset includes omitting any key entries that have expired.

At operation 1020, the entries of each kvset in the kvset subset that are not in the set of collision keys are added to the new kvset. In an example, operations 1020 and 1015 can operate simultaneously to add entries to the new kvset.

At operation 1025, the kvset subset is replaced with the new kvset by writing the new kvset and removing (e.g., deleting, marking for deletion, etc.) the kvset subset.

FIG. 11 is a block diagram illustrating key-value compression, in accordance with an embodiment. Key-value compression differs from key compression in its handling of values. Key-value compression reads the key-value pairs and tombstones of the merge set, removes obsolete key-value pairs or tombstones, writes the resulting key-value pairs and tombstones to one or more new kvsets in the same node, and deletes the kvsets comprising the merge set from the node. The new kvsets atomically replace the merge set and are logically equivalent to it, both in content and in placement within the node's newest-to-oldest logical sequence of kvsets.

As illustrated, the kvsets KVS3, KVS2, and KVS1 comprise the merge set. The shaded key entries and values will be kept in the merge and placed into the new KVS4, which is written to the node to replace KVS3, KVS2, and KVS1. Again, as discussed above with respect to key compression, the key collisions on keys A and B are resolved in favor of the most recent entries. Key-value compression differs from key compression in the removal of unreferenced values. Thus, here, KVS4 is illustrated consuming only the space needed to hold its current keys and values.

In practice, for example where keys and values are stored separately in key blocks and value blocks, KVS4 comprises both new key blocks (as would result from key compression) and new value blocks (unlike the result of key compression). Again, however, key-value compression does not block additional kvsets from being written to the node while the key-value compression executes, because the added kvsets will be logically newer than KVS4, the result of the key-value compression. Accordingly, KVS4 is illustrated in the oldest position of the node (e.g., to the right).

FIG. 12 illustrates an example of a method 1200 for key-value compression, in accordance with an embodiment. The operations of the method 1200 are implemented with electronic hardware (e.g., circuitry), such as that described throughout this application (including with respect to FIG. 26 below).

At operation 1205, a kvset subset (e.g., a merge set) from the sequence of kvsets of the node is selected. In an example, the kvset subset comprises contiguous kvsets and includes the oldest kvset.

At operation 1210, a set of collision keys is located. Members of the set of collision keys include key entries that appear in at least two kvsets in the sequence of kvsets of the node.

At operation 1215, the most recent key entry, and the corresponding value, for each member of the set of collision keys is added to a new kvset.
In an example, where the node has no child nodes and the merge set includes the oldest kvset, writing the most recent key entry and corresponding value for each member of the set of collision keys to the new kvset, and writing the entries of each kvset in the kvset subset that are not in the set of collision keys to the new kvset, includes omitting any key entries that include a tombstone. In an example, where the node has no child nodes and the merge set includes the oldest kvset, writing these entries to the new kvset includes omitting any key entries that have expired.

At operation 1220, the entries and values of each kvset in the kvset subset that are not in the set of collision keys are added to the new kvset.

At operation 1225, the kvset subset is replaced with the new kvset by writing the new kvset (e.g., to storage) and removing the kvset subset.

The overflow and boost compressions discussed below with respect to FIGS. 15 through 18 are forms of key-value compression in which the resulting kvsets are placed in a child node or the parent node, respectively. Because each traverses the tree, and because the KVS tree implements a deterministic mapping between parent and child nodes, a brief discussion of this deterministic mapping is presented here before these other compression operations are discussed.

FIG. 13 illustrates an example of an overflow value and its relation to a tree, in accordance with an embodiment. The deterministic mapping ensures that, given a key, one can know to which child node a key-value pair will map without regard to the contents of the KVS tree. An overflow function accepts a key and produces an overflow value corresponding to the deterministic mapping of the KVS tree. In an example, the overflow function accepts both the key and a current tree level, and produces an overflow value specific to the parent or child of the key at that tree level.

By way of illustration, a simple deterministic mapping (not illustrated in FIG. 13) may include, for example, an alphabetical mapping in which, for keys composed of alphabetic characters, each tree level includes a child node for each letter of the alphabet and the mapping uses the characters of the key in order; for example, the first character determines the L1 child node, the second character determines the L2 child node, and so on. Although simple, and satisfying the deterministic-mapping requirement of the KVS tree, this technique suffers somewhat from rigidity, poor balance in the tree, and a lack of control over tree fan-out.

A better technique is to hash the key and designate portions of the hash for each tree-level mapping. This ensures that keys are spread evenly as they traverse the tree (given an adequate hash technique), and permits controlling the fan-out by choosing the size of the hash portion for any given tree level. Moreover, because hash techniques generally allow the size of the resulting hash to be configured, an adequate number of bits can be assured, avoiding, for example, the problem of the simple technique discussed above, in which a short word (e.g., "the") has only enough characters for a three-level tree.

FIG. 13 illustrates the result of a key hash with portions 1305, 1310, and 1315 corresponding to L1, L2, and L3 of the tree, respectively. For a given hash of the key, the traversal of the tree proceeds along the dashed lines and nodes. Specifically, starting at the root node 1320, the portion 1305 directs the traversal to node 1325. Next, the portion 1310 directs the traversal to node 1330. The traversal completes when the portion 1315 points to node 1335, at the deepest level of the tree possible given the size and apportionment of the illustrated key hash.
For a given key hash, the traversal of the tree proceeds along the dashed lines and nodes. Specifically, starting at root node 1320, portion 1305 directs the traversal to node 1325. Next, portion 1310 directs the traversal to node 1330. The traversal completes when portion 1315 points to node 1335, at the deepest tree level possible based on the size and apportionment of the illustrated key hash.

In an example, for a given key K, the hash of key K (or of a subkey of key K) is referred to as the overflow value of key K. Note that two different keys can have the same overflow value. When a subkey is used to generate the overflow value, this condition is generally desirable, to enable prefix scans or prefix tombstones as discussed below.

In an example, for a given KVS tree, the overflow value for a given key K is constant, and the binary representation of the overflow value comprises B bits, numbered 0 through (B-1). In this example, the KVS tree is configured such that the nodes at a given tree level L all have the same number of child nodes, and that number is an integer power of two greater than or equal to two. In this configuration, the bits of the overflow value are used for key assignment as follows.

For a node at level L in the KVS tree, let 2^E(L) be the configured number of child nodes for the node, where 2^E(L) >= 2. Then, for a given node in the KVS tree and a given key K, the overflow value of key K specifies the child node used for overflow compression as follows:

A) Level 0: overflow value bits 0 through (E(0)-1) specify the child node number for key K;
B) Level 1: overflow value bits E(0) through (E(0)+E(1)-1) specify the child node number for key K;
C) Level L (L>1): overflow value bits sum(E(0), ..., E(L-1)) through (sum(E(0), ..., E(L))-1) specify the child node number for key K.

The table below illustrates a specific example of this radix-based key assignment technique, given a KVS tree with seven (7) levels, a key K, and a 16-bit overflow value for key K, specifically 0110011110100011:

    Level                 0    1      2     3      4       5
    Child node count      2    8      4     16     32      2
    Overflow value bits   0    1-3    4-5   6-9    10-14   15
    Key K bit values      0    110    01    1110   10001   1
    Selected child node   0    6      1     14     17      1

In this table, "Level" is the level number in the KVS tree; "Child node count" is the number of child nodes configured for all nodes at that level; "Overflow value bits" are the overflow value bits used by overflow compression for key assignment at that level; "Key K bit values" is the given 16-bit overflow value of key K segmented into the bits used for key assignment at each level; and "Selected child node" is the child node number selected for overflowing any (non-obsolete) key-value pair or tombstone with the given overflow value—this includes all (non-obsolete) key-value pairs and tombstones with key K, as well as those with other keys that happen to have the same overflow value as key K.

In an example, for a given KVS tree, the overflow value computation and the overflow value size (in bits) are the same for all keys.
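For illustration, the following Python sketch implements the radix-based key assignment just described and reproduces the worked example from the table. The choice of SHA-256 as the hash and the specific per-level bit counts are assumptions of the sketch; the embodiment requires only a deterministic mapping.

    import hashlib

    # E(L) for levels 0..5: the child count at level L is 2**E(L).
    CHILD_BITS = [1, 3, 2, 4, 5, 1]        # sums to 16 bits

    def spill_value(key: bytes, bits: int = 16) -> int:
        """Overflow value: the top `bits` bits of a hash of the key."""
        digest = hashlib.sha256(key).digest()
        return int.from_bytes(digest, "big") >> (len(digest) * 8 - bits)

    def child_index(spill: int, level: int, total_bits: int = 16) -> int:
        """Child node number for a node at `level`, per the bit allocation."""
        offset = sum(CHILD_BITS[:level])   # bits consumed by lower levels
        width = CHILD_BITS[level]
        shift = total_bits - offset - width   # bit 0 is the most significant
        return (spill >> shift) & ((1 << width) - 1)

    # Reproduce the table: overflow value 0110011110100011 for key K.
    spill = 0b0110011110100011
    print([child_index(spill, lvl) for lvl in range(6)])  # [0, 6, 1, 14, 17, 1]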
As described above, using a sufficient hash permits control over the number of bits in the overflow value while also ensuring, for example, an overflow value size sufficient for a desired number of tree levels and a desired number of child nodes at each level. In an example, for a given KVS tree, the overflow value of key K can be computed as needed or stored on a storage medium (e.g., cached).

FIG. 14 illustrates an example of a method 1400 for an overflow value function, in accordance with an embodiment. The operations of method 1400 are implemented using, for example, electronic hardware (e.g., circuitry) as described throughout this application (including FIG. 26 below).

At operation 1405, a portion of a key is extracted. In an example, the portion of the key is the entire key.

At operation 1410, an overflow value is derived from the portion of the key. In an example, deriving the overflow value comprises performing a hash of the portion of the key.

At operation 1415, a portion of the overflow value is returned based on the tree level of the parent node. In an example, returning the portion of the overflow value comprises applying a pre-set apportionment to the overflow value and returning the portion of the overflow value that the apportionment assigns to the tree level of the parent node. Here, the pre-set apportionment defines which portions of the overflow value apply to which levels of the tree.

In an example, the pre-set apportionment defines a maximum number of child nodes for at least some of the tree levels. In an example, the pre-set apportionment defines a maximum depth of the tree. In an example, the pre-set apportionment defines a sequence of bit counts, each specifying a number of bits, the sequence ordered from the lowest tree level to the highest, such that the overflow value portion for the lowest tree level is equal in size to the first bit count and starts at the beginning of the overflow value, and the overflow value portion for the n-th tree level is equal in size to the n-th bit count in the sequence, offset into the overflow value by the sum of the preceding n-1 bit counts.
FIG. 15 is a block diagram illustrating overflow compression, in accordance with an embodiment. As described above, overflow compression is a combination of key-value compression and a tree traversal (to child nodes) for the resulting kvsets. Overflow compression (or just overflow) reads the key-value pairs and tombstones in the merge set, removes all obsolete key-value pairs and tombstones (garbage), writes the resulting key-value pairs and tombstones to new kvsets in some or all of the child nodes of the node containing the merge set, and deletes the kvsets comprising the merge set from that node. These new kvsets atomically replace, and are logically equivalent to, the merge set.

Overflow compression uses a deterministic technique for assigning the key-value pairs and tombstones in the merge set to the child nodes of the node containing the merge set. In particular, overflow compression may use any key assignment method such that, for a given node and a given key K, overflow compression always writes any (non-obsolete) key-value pair or tombstone with key K to the same child node of that node. In a preferred embodiment, overflow compression uses a radix-based key assignment method, such as the key assignment method in the example presented in detail above.

To facilitate understanding of overflow, in the illustrated example the parent node contains two kvsets comprising the merge set. The key-value pairs 1505, 1510, and 1515 in the two kvsets have overflow values 00X, 01X, and 11X, respectively, which correspond to three of the four child nodes of the parent node. Accordingly, key-value pair 1505 is placed into the new kvset X, key-value pair 1510 is placed into the new kvset Y, and key-value pair 1515 is placed into the new kvset Z, with each new kvset written to the child node corresponding to the overflow value. Note also that each new kvset is written to the newest (e.g., leftmost) position in the corresponding child node.

In an example, the merge set for overflow compression must contain the oldest kvset in the node containing the merge set. In an example, if the node containing the merge set has no child nodes when the overflow compression begins, the configured number of child nodes is created.

As with the other compressions discussed above, new kvsets can be added to the node containing the merge set while the overflow compression executes, because by definition these added kvsets will not be in the merge set, and because they will be logically newer than any kvset generated by the overflow compression.

FIG. 16 illustrates an example of a method 1600 for overflow compression, in accordance with an embodiment. The operations of method 1600 are implemented using, for example, electronic hardware (e.g., circuitry) as described throughout this application (including FIG. 26 below).

At operation 1605, a subset of the kvset sequence is selected. In an example, the subset contains contiguous kvsets, including the oldest kvset.

At operation 1610, a child node mapping is calculated for each key in each kvset of the kvset subset. Here, the child node mapping is the deterministic mapping from the parent node to a child node based on the key and the tree level of the parent node.

At operation 1615, keys and corresponding values are collected into new kvsets based on the child node mapping, exactly one new kvset per mapped child node. Key conflicts can occur during this collection; as discussed above with respect to FIGS. 10 and 12, such conflicts are resolved in favor of the newer key entry.

At operation 1620, each new kvset is written to the newest position in the kvset sequence of the corresponding child node.

At operation 1625, the kvset subset is removed from the root node.

Method 1600 can be extended to include performing a second overflow operation on a child node in response to a metric of the child node exceeding a threshold after the first overflow operation completes.
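A compact sketch of method 1600 follows, reusing key_value_compact, spill_value, and child_index from the earlier sketches (keys are assumed to be bytes so they can be hashed); the bucket-per-child structure is an assumption of the sketch:

    # Sketch of overflow compression: merge the selected kvsets, then route
    # each surviving entry to a child-node bucket by its overflow value.
    def spill_compact(merge_set, level, contains_oldest):
        merged = key_value_compact(merge_set, node_is_leaf=False,
                                   contains_oldest=contains_oldest)
        buckets = {}                       # child node number -> new kvset
        for key, value in merged.items():
            child = child_index(spill_value(key), level)
            buckets.setdefault(child, {})[key] = value
        # Each bucket becomes the newest kvset of the matching child node,
        # after which the merge set is removed from the parent node.
        return buckets

Note that tombstones are retained here (node_is_leaf=False), since entries older than a tombstone may still exist in or below the child nodes.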
FIG. 17 is a block diagram illustrating boost compression, in accordance with an embodiment. Boost compression differs from overflow compression in that the new kvset is written to the parent node rather than to child nodes. Boost compression (or just boost) reads the key-value pairs and tombstones in the merge set, removes all obsolete key-value pairs and tombstones, writes the resulting key-value pairs and tombstones to a new kvset in the parent of the node containing the merge set, and deletes the kvsets comprising the merge set. These new kvsets atomically replace, and are logically equivalent to, the merge set.

Because the kvsets in a KVS tree are organized newest-to-oldest from the root to the leaves, the merge set for a boost compression contains the newest kvset in the node containing the merge set, and the kvset generated by the boost compression is placed in the oldest position of the kvset sequence in the node's parent. Unlike the other compressions discussed above, to ensure that the newest kvset of the compressed node is in the merge set, new kvsets cannot be added to the node containing the merge set while the boost compression executes. Boost compression is therefore a blocking compression.

As illustrated, the key-value pairs of KVS 1705 and 1710 are merged into the new KVS M 1715 and stored in the oldest position of the parent node's kvset sequence. Boost compression can be applied to a merge set when, for example, the goal is to reduce the number of levels in the KVS tree and thereby increase the efficiency of searching for keys in the KVS tree.

FIG. 18 illustrates an example of a method 1800 for boost compression, in accordance with an embodiment. The operations of method 1800 are implemented using, for example, electronic hardware (e.g., circuitry) as described throughout this application (including FIG. 26 below).

At operation 1805, key-value compression is performed on the child node to generate a new kvset, without writing the new kvset to the child node.

At operation 1810, the new kvset is written to the parent node, in the oldest position of the parent node's kvset sequence.

Key-value compression, overflow compression, and boost compression operations physically remove obsolete key-value pairs and tombstones from the merge set and thus reduce the amount of key-value data stored in the KVS tree (measured, for example, in bytes). In doing so, these compression operations read non-obsolete values from, for example, the value blocks in the merge set and write those values to value blocks in the kvsets they generate.

In contrast, a key compression operation physically removes keys (and tombstones) from the merge set but removes values only logically; the values are physically retained in the kvset resulting from the key compression. Key compression can increase the efficiency of searching for keys in the node containing the merge set by reducing the number of kvsets in the node, while avoiding the additional reads and writes of value blocks incurred by, for example, a key-value compression operation. In addition, key compression produces information useful for future maintenance operations. The KVS tree uniquely supports key compression because of the separation of keys into key blocks and values into value blocks described above.

The KVS tree maintenance techniques (e.g., compressions) described above operate when a trigger condition is met. Controlling when and where (e.g., on which nodes) maintenance occurs trades the processing time spent against increased space or search efficiency.
Some of the metrics gathered during maintenance, or during ingest, can improve the system's ability to optimize later maintenance operations. Here, these metrics are called garbage metrics or, when derived by estimation, estimated garbage metrics. Examples of such garbage metrics include the number of obsolete key-value pairs and tombstones in a node, the amount of storage capacity they consume, and the amount of storage capacity consumed by unreferenced data in the node's value blocks. Such garbage metrics indicate how much garbage could be eliminated by performing, for example, key-value compression, overflow compression, or boost compression on the node's kvsets.

For a given KVS tree, calculating or estimating the garbage metrics of its nodes provides several advantages, including the following:

A) Prioritizing garbage collection operations (specifically, those that physically remove obsolete key-value pairs and tombstones, such as key-value compression, overflow compression, and boost compression) for the nodes with the most garbage. Prioritizing garbage collection in this way increases its efficiency and reduces the associated write-amplification; or

B) Estimating the numbers of valid and obsolete key-value pairs in the KVS tree and the amount of storage capacity consumed by each category. Such estimates are useful in reporting the capacity utilization of the KVS tree.

In some cases it is advantageous to directly calculate the garbage metrics for a given node of the KVS tree, while in other cases it is advantageous to estimate them. Techniques for both calculated garbage metrics and estimated garbage metrics are therefore described below.

To facilitate garbage metrics, certain kvset statistics can be collected or maintained. In an example, these statistics are maintained within the kvset itself, such as in the primary key block header of the kvset. The following is a non-exhaustive list of kvset statistics that may be maintained (an illustrative sketch of such a record follows the list):

A) number of key-value pairs;
B) number of key tombstones;
C) capacity required to store all keys of the key-value pairs and tombstones;
D) capacity required to store all values of the key-value pairs;
E) key size statistics, including minimum, maximum, median, and mean;
F) value size statistics, including minimum, maximum, median, and mean;
G) count of unreferenced values, and the capacity consumed by unreferenced values, in the case where the kvset is the result of a key compression; and
H) minimum and maximum time-to-live (TTL) values of any key-value pair. A KVS tree may allow a user to specify a TTL value when storing a key-value pair, and the key-value pair will be removed during a compression operation if it has exceeded its lifetime.
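An illustrative sketch of such a statistics record follows; the field names are assumptions of the sketch, and the median/mean statistics of items E and F are elided for brevity:

    from dataclasses import dataclass

    # Sketch of per-kvset statistics as they might be kept in the primary
    # key block header of a kvset.
    @dataclass
    class KvsetStats:
        kv_count: int              # A) number of key-value pairs
        tombstone_count: int       # B) number of key tombstones
        key_bytes: int             # C) capacity for all keys and tombstones
        value_bytes: int           # D) capacity for all values
        min_key_size: int = 0      # E) key size statistics (min shown)
        max_key_size: int = 0      #    ... and max
        unref_value_count: int = 0 # G) unreferenced values (after key compression)
        unref_value_bytes: int = 0 #    ... and the capacity they consume
        min_ttl: int = 0           # H) time-to-live bounds, if TTLs are used
        max_ttl: int = 0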
Calculated garbage metrics involve computations on known quantities to produce exact results. For example, if a kvset is known to contain n bits of obsolete data, then key-value compression of the kvset will free those n bits. The source of data for calculated garbage metrics is key compression. Key compression logically removes obsolete key-value pairs and tombstones from the merge set and physically removes redundant keys; however, unreferenced data may remain in the value blocks of the kvset generated by the key compression. Key compression therefore yields exact knowledge of which values in the new kvset are unreferenced, and of their sizes. Knowing the sizes of these values permits an accurate count of the storage that would be freed by the other compressions. Accordingly, when key compression is performed on a merge set in the KVS tree, the garbage metrics for each resulting kvset can be recorded in that kvset. Example garbage metrics that can be maintained from key compression include:

A) count of unreferenced values in the kvset; and
B) bytes of unreferenced values in the kvset.

In an example, given a first key compression of a merge set, and a second key compression in the same node whose merge set includes the kvset generated by the first key compression, the garbage metrics recorded by the first key compression can be added to the corresponding garbage metrics recorded by the second key compression. For example, if the first key compression operation produced a single kvset S with an associated key compression garbage metric specifying a count Ucnt of unreferenced values, then Ucnt may be included in the count of unreferenced values in the key compression garbage metrics generated by the second key compression operation.

In an example, for a given node in the KVS tree, if the merge set for a key compression operation contains all of the kvsets in the node, then the recorded key compression garbage metrics may include:

A) count of unreferenced values in the node; and
B) bytes of unreferenced values in the node.

Clearly, if every kvset in a given node is the result of a key compression operation, then each key compression garbage metric for the node is the sum of the corresponding key compression garbage metrics of the individual kvsets in the node.

Estimated garbage metrics provide values that estimate the gain from a compression performed on a node, and are generally collected without performing key compression. The following terms are used in the discussion below. Let:

A) T = the number of kvsets in a given node;
B) S(j) = a kvset in a given node, where S(1) is the oldest kvset and S(T) is the newest;
C) KVcnt(S(j)) = the number of key-value pairs in S(j);
D) NKVcnt = sum(KVcnt(S(j))) for j in the range 1 to T;
E) Kcap(S(j)) = the capacity, in bytes, required to store all keys of S(j);
F) NKcap = sum(Kcap(S(j))) for j in the range 1 to T;
G) Vcap(S(j)) = the capacity, in bytes, required to store all values of S(j);
H) NVcap = sum(Vcap(S(j))) for j in the range 1 to T; and
I) NKVcap = NKcap + NVcap.

One form of estimated garbage metric is the historical garbage metric, in which historical garbage collection information is used to estimate the garbage metrics for a given node in the KVS tree.
Examples of such historical garbage collection information include, but are not limited to:

A) a simple, cumulative, or weighted moving average of the fraction of obsolete key-value pairs seen in prior executions of garbage collection operations in the given node; or

B) a simple, cumulative, or weighted moving average of the fraction of obsolete key-value pairs seen in prior executions of garbage collection operations in any node at the same level of the KVS tree as the given node.

In the above examples, garbage collection operations include, but are not limited to, key compression, key-value compression, overflow compression, and boost compression.

Given a node in the KVS tree, historical garbage collection information and the kvset statistics provide the information needed to generate estimated garbage metrics for the node.

In an example, a node simple moving average (NodeSMA) can be used as a historical garbage metric. Here, NSMA(E) = the average of the fractions of obsolete key-value pairs seen in the last E executions of garbage collection operations in the given node, where E is configurable. In this example, the NodeSMA estimates for the given node include the following:

A) NKVcnt * NSMA(E) as the count of obsolete key-value pairs in the node;
B) NKVcap * NSMA(E) as the bytes of obsolete key-value data in the node;
C) NKVcnt - (NKVcnt * NSMA(E)) as the count of valid key-value pairs in the node; or
D) NKVcap - (NKVcap * NSMA(E)) as the bytes of valid key-value data in the node.

Another variation on historical garbage metrics is the level simple moving average (LevelSMA) garbage metric. In this example, let LSMA(E) = the average of the fractions of obsolete key-value pairs seen in the last E executions of garbage collection operations in any node at the same level of the KVS tree as the given node, where E is configurable. In this example, the LevelSMA estimates for the given node include:

A) NKVcnt * LSMA(E) as the count of obsolete key-value pairs in the node;
B) NKVcap * LSMA(E) as the bytes of obsolete key-value data in the node;
C) NKVcnt - (NKVcnt * LSMA(E)) as the count of valid key-value pairs in the node; or
D) NKVcap - (NKVcap * LSMA(E)) as the bytes of valid key-value data in the node.

These examples of historical garbage metrics are not exhaustive, but rather illustrate the kinds of metrics that can be collected. Other example historical garbage metrics include the node cumulative moving average (NodeCMA), node weighted moving average (NodeWMA), level cumulative moving average (LevelCMA), and level weighted moving average (LevelWMA) garbage metrics.
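For illustration, a sketch of the NodeSMA estimate follows; the list of per-run obsolete fractions is assumed to have been recorded by prior garbage collection operations on the node:

    # Sketch of the NodeSMA historical garbage metric: average the fraction
    # of obsolete entries over the last E garbage collection runs on the
    # node, then scale the node's current totals by that fraction.
    def node_sma_estimate(obsolete_fractions, nkv_cnt, nkv_cap, e=5):
        history = obsolete_fractions[-e:]   # assumes at least one prior run
        nsma = sum(history) / len(history)
        return {
            "obsolete_count": nkv_cnt * nsma,            # NKVcnt * NSMA(E)
            "obsolete_bytes": nkv_cap * nsma,            # NKVcap * NSMA(E)
            "valid_count": nkv_cnt - nkv_cnt * nsma,
            "valid_bytes": nkv_cap - nkv_cap * nsma,
        }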
Another variation of estimated garbage metric, available when the kvsets of the KVS tree maintain Bloom filters over their keys, is the Bloom filter garbage metric. As mentioned above, in an example of a KVS tree, a given kvset contains a Bloom filter used to efficiently determine whether the kvset might contain a given key, with one entry in the kvset's Bloom filter for each key in the kvset. These Bloom filters can be used to estimate the garbage metrics for a given node in the KVS tree. For a given node, known techniques (e.g., as described in Papapetrou, Odysseas, et al., "Cardinality estimation and dynamic length adaptation for Bloom filters," Distributed and Parallel Databases, 2010) can be used to approximate the cardinality of the intersection of the key sets represented by the Bloom filters of the node's kvsets. This approximated value is referred to herein as the Bloom estimated cardinality of the node.

Given a node in the KVS tree, the node's Bloom estimated cardinality and kvset statistics permit estimated garbage metrics for the node to be generated in several ways. An example Bloom filter garbage metric is the Bloom delta garbage metric. Let NBEC = the Bloom estimated cardinality over the T kvsets in the given node, and Fobs = (NKVcnt - NBEC) / NKVcnt, an estimate of the fraction of obsolete key-value pairs in the node. In this example, the Bloom delta garbage metrics for the given node can include:

A) NKVcnt - NBEC as the count of obsolete key-value pairs in the node;
B) NKVcap * Fobs as the bytes of obsolete key-value data in the node;
C) NBEC as the count of valid key-value pairs in the node; or
D) NKVcap - (NKVcap * Fobs) as the bytes of valid key-value data in the node.

Probabilistic filters other than Bloom filters, for which the cardinality of the key sets represented by two or more such filters can be approximated, may be used in place of Bloom filters in estimated garbage metrics.

Calculated and estimated garbage metrics can be combined to produce hybrid garbage metrics, which are another form of estimated garbage metric because they include an estimated component. For example, given a node comprising T kvsets, if key compression garbage metrics are available for W of those kvsets, where W < T, hybrid garbage metrics for the node can be generated as follows. For the W kvsets in the node for which key compression garbage metrics are available, let:

A) KGMOcnt = an estimate of the count of obsolete key-value pairs in the W kvsets, plus the sum of the counts of unreferenced values from each of the W kvsets;
B) KGMOcap = an estimate of the bytes of obsolete key-value data in the W kvsets, plus the sum of the bytes of unreferenced values from each of the W kvsets;
C) KGMVcnt = an estimate of the count of valid key-value pairs in the W kvsets; and
D) KGMVcap = an estimate of the bytes of valid key-value data in the W kvsets;

where the estimated components can be generated using one of the techniques discussed above, under the assumption that the W kvsets are the only kvsets in the node. For the (T-W) kvsets in the node for which key compression garbage metrics are not available, let:

A) EGMOcnt = an estimate of the count of obsolete key-value pairs in the (T-W) kvsets;
B) EGMOcap = an estimate of the bytes of obsolete key-value data in the (T-W) kvsets;
C) EGMVcnt = an estimate of the count of valid key-value pairs in the (T-W) kvsets; and
D) EGMVcap = an estimate of the bytes of valid key-value data in the (T-W) kvsets;

where these estimates may be generated using one of the techniques discussed above, under the assumption that the (T-W) kvsets are the only kvsets in the node. Given these values, the hybrid garbage metrics for the node can include:

A) KGMOcnt + EGMOcnt as the count of obsolete key-value pairs in the node;
B) KGMOcap + EGMOcap as the bytes of obsolete key-value data in the node;
C) KGMVcnt + EGMVcnt as the count of valid key-value pairs in the node; or
D) KGMVcap + EGMVcap as the bytes of valid key-value data in the node.

Garbage metrics allow garbage collection operations to be prioritized for the tree levels or nodes having an amount of garbage sufficient to make the operation worthwhile.
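For illustration, a sketch of the Bloom delta garbage metrics follows; the NBEC is taken as an input here, since in practice it is approximated from the kvsets' Bloom filters by the cited techniques:

    # Sketch of the Bloom delta garbage metrics for a node, given NBEC,
    # NKVcnt, and NKVcap as defined above.
    def bloom_delta_metrics(nbec, nkv_cnt, nkv_cap):
        fobs = (nkv_cnt - nbec) / nkv_cnt   # estimated obsolete fraction
        return {
            "obsolete_count": nkv_cnt - nbec,
            "obsolete_bytes": nkv_cap * fobs,
            "valid_count": nbec,
            "valid_bytes": nkv_cap - nkv_cap * fobs,
        }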
Prioritizing garbage collection operations in this way increases their efficiency and reduces the associated write-amplification. In addition, estimates of the numbers of valid and obsolete key-value pairs in the tree, and of the storage capacity consumed by each category, are useful in reporting the capacity utilization of the tree.

FIG. 19 illustrates an example of a method 1900 for performing maintenance on a KVS tree, in accordance with an embodiment. The operations of method 1900 are implemented using, for example, electronic hardware (e.g., circuitry) as described throughout this application (including FIG. 26 below).

At operation 1905, a kvset is created for a node in the KVS tree. As part of the kvset creation, a kvset metric set is computed for the kvset. In an example, the kvset metric set includes the number of key-value pairs in the kvset. In an example, the kvset metric set includes the number of tombstones in the kvset. In an example, the kvset metric set includes the storage capacity needed to store all key entries for the key-value pairs and tombstones in the kvset. In an example, the kvset metric set includes the storage capacity needed for all values of the key-value pairs in the kvset.

In an example, the kvset metric set includes key size statistics for the keys in the kvset. In an example, the key size statistics include at least one of a maximum, minimum, median, or mean. In an example, the kvset metric set includes value size statistics for the values in the kvset. In an example, the value size statistics include at least one of a maximum, minimum, median, or mean.

In an example, the kvset metric set includes a minimum or maximum time-to-live (TTL) value for a key-value pair in the kvset. TTL can be useful when an ingest operation specifies the period for which a key-value pair will be valid; after the key-value pair expires, it becomes a prime target for reclamation via a compression operation.

In an example, the kvset is created in response to a compression operation, the compression operation being at least one of a key compression, a key-value compression, an overflow compression, or a boost compression. In an example, the compression operation is a key compression. In this example, the kvset metric set may include a metric of unreferenced values in the kvset resulting from the key compression. In an example, the unreferenced value metric includes at least one of a count of unreferenced values or a storage capacity consumed by unreferenced values. As used herein, storage capacity consumed is measured in the units (bits, bytes, blocks, etc.) used by the underlying storage medium to hold the key entries or values, as appropriate.

In instances where the kvset is created by a compression operation, the kvset metric set may include an estimate of the obsolete key-value pairs in the kvset. As used herein, this is an estimate because the compression has insight only into the obsolete (e.g., superseded) key-value pairs within the merge set; it cannot know whether an entry in a newer kvset, not part of the compression, has rendered a seemingly current key-value pair obsolete. In an example, the estimate of obsolete key-value pairs can be calculated by summing the number of key entries from the pre-compression kvsets that were not included in the new kvset.
Thus, as part of the compression, the number of obsolete pairs with respect to the merge set is known and can be used as an estimate of obsolete data in the created kvset. Similarly, an estimate of the valid key-value pairs in the kvset can be calculated by summing the number of key entries from the pre-compression kvsets that were included in the new kvset, and made part of the kvset metric set. In an example, the kvset metric set includes an estimated storage size of the obsolete key-value pairs in the kvset. In an example, the kvset metric set includes an estimated storage size of the valid key-value pairs in the kvset, calculated by summing the storage sizes of the key entries and corresponding values from the pre-compression kvsets that were included in the new kvset. These estimates are useful as historical metrics: the estimated obsolete data would be removed by any compression other than a key compression, and if a node exhibits typical (e.g., historical) behavior under compression, that behavior can be assumed to continue in the future.

In an example, the kvset metric set is stored in the kvset (e.g., in the primary key block header). In an example, the kvset metric set is stored in the node rather than in the kvset. In an example, a first subset of the kvset metrics is stored in the kvset and a second subset is stored in the node.

At operation 1910, the kvset is added to the node. Generally, once added to the node, the kvset is also written (for example, to disk storage).

At operation 1915, a node is selected for a compression operation based on a metric in the kvset metric set. Thus, the kvset metrics, the node metrics discussed below, or both can feed a garbage collector or similar tree-maintenance process. In an example, selecting the node for the compression operation comprises collecting kvset metric sets for a plurality of nodes, sorting the plurality of nodes based on those metric sets, and selecting a subset of the plurality of nodes based on the sort order. In this example, operation 1920 can be implemented such that performing the compression operation on the node includes performing the compression operation on each node in the subset of the plurality of nodes (which includes the node). In an example, the cardinality of the subset is set by a performance value, such as the efficiency of performing the compressions as measured by the space reclaimed. This can typically be implemented as a threshold. In an example, a threshold function can be used that accepts several parameters (e.g., the amount of unused storage capacity remaining on the underlying storage medium and an estimate of the capacity to be reclaimed by the compression operation) to decide whether to perform a given compression operation (see the selection sketch below).

At operation 1920, the compression operation is performed on the node. In an example, the compression operation type (e.g., key compression, key-value compression, overflow compression, or boost compression) is selected based on a metric in the kvset metric set.
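For illustration, a sketch of the selection logic of operation 1915 follows; estimate_garbage_bytes stands in for any of the calculated, historical, Bloom, or hybrid metrics above, and the reclaim threshold is an assumption of the sketch:

    # Sketch of node selection: rank candidate nodes by an estimated
    # garbage metric and keep the best few that clear a reclaim threshold.
    def select_nodes(nodes, estimate_garbage_bytes, min_reclaim_bytes, limit):
        ranked = sorted(nodes, key=estimate_garbage_bytes, reverse=True)
        return [n for n in ranked[:limit]
                if estimate_garbage_bytes(n) >= min_reclaim_bytes]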
The operations of method 1900 can be extended to include modifying a node metric in response to adding the kvset to the node. In an example, the node metric includes a fraction of estimated obsolete key-value pairs in the kvsets subjected to previous compressions performed on a group of nodes that includes the node. In an example, the value is a simple average. In an example, the value is a moving average. In an example, the value is a weighted average. In an example, the value is the average of the fractions of estimated obsolete key-value pairs in the kvsets subjected to a set number of the most recent previous compressions on the node. In an example, the value is the average of the fractions of estimated obsolete key-value pairs in the kvsets subjected to a set number of the most recent compressions on all nodes at the node's tree level.

In an example, the group of nodes contains only the node. In an example, the group of nodes includes all nodes at the node's tree level. In an example, the node metric comprises the sum of corresponding metrics in the kvset metric set generated by the compression operation and in previous kvset metric sets resulting from compression operations performed on the node.

In an example, the node metric comprises an estimated number of keys that are identical between the kvset and a different kvset of the node. In an example, this estimated number of keys is calculated by obtaining a first key Bloom filter from the kvset, obtaining a second key Bloom filter from the different kvset, and intersecting the first key Bloom filter and the second key Bloom filter to produce a node Bloom estimated cardinality (NBEC). Although this example is written for two kvsets (e.g., the intersection of only two Bloom filters from two kvsets), any number of kvset Bloom filters can be intersected to arrive at the NBEC. The NBEC represents an estimate of the number of keys common to all kvsets whose Bloom filters participate in the intersection.

In an example, the node metric includes subtracting the NBEC from an NKVcnt value to estimate the number of obsolete key-value pairs in the node. Here, the NKVcnt value is the total count of key-value pairs in each kvset of the node whose Bloom filter was intersected to produce the NBEC. In an example, the node metric includes multiplying an NKVcap value by an Fobs value. Here, the NKVcap value is the total storage capacity used by the keys and values of each kvset whose Bloom filter was intersected to produce the NBEC, and the Fobs value is the result of subtracting the NBEC from the NKVcnt value and dividing by NKVcnt.

In an example, the node metric is stored in the node, along with node metrics from other nodes. In an example, node metrics are stored per tree level, common to all nodes at that level of the KVS tree.

The garbage collection metrics described above, and their use to improve KVS tree performance, can be assisted in several ways by modifying the normal operation of the KVS tree, or of elements therein (e.g., tombstones), under certain circumstances. Examples include tombstone acceleration, update tombstones, prefix tombstones, and immutable-data KVS trees.

Tombstones represent deleted key-value pairs in the KVS tree. A tombstone is physically removed only when it is compressed in a leaf of the KVS tree and the compression's merge set contains the oldest kvset in that leaf; until then it is retained, so that searches return the deletion rather than a possibly obsolete value of the blocked key.
When a key compression or key-value compression produces a tombstone in a merge set on a node that has child nodes, tombstone acceleration includes writing the non-obsolete tombstones, following the key assignment method used for overflow compression in the KVS tree, to one or more new kvsets in some or all of the child nodes.

If the merge set for the key compression or key-value compression operation contains the oldest kvset in the node containing the merge set, then the accelerated tombstones (if any) need not be included in the new kvset created in the node by the compression operation. Otherwise, if the merge set does not contain the oldest kvset of the node, the accelerated tombstones (if any) are also included in the new kvset created in the node by the compression operation. Pushing accelerated tombstones to older regions of the KVS tree facilitates garbage collection by allowing key-value pairs in the child nodes to be removed without waiting for the original tombstones to be pushed down to those child nodes.

Key compression or key-value compression operations may apply prescribed or computed criteria to determine whether to perform tombstone acceleration. Examples of such tombstone acceleration criteria include, but are not limited to, the number of non-obsolete tombstones in the merge set and the known or estimated amount of key-value data (for example, in bytes) that the tombstones in the merge set would logically delete.

Update tombstones operate in a manner similar to accelerated tombstones, although the original ingested value is not a tombstone. Essentially, when a new value for a key is added to the KVS tree, all older values of the key become eligible for garbage collection; pushing a tombstone down the tree, as with tombstone acceleration, allows compressions of the child nodes to remove those obsolete values.

In an example, an ingest operation adds a new kvset to the root node of the KVS tree, and a key-value pair with key K in this new kvset carries a flag or other indicator that it is an updated key-value pair, replacing a key-value pair with key K included in an earlier ingest operation. This indicator is expected, but not required, to be accurate. If an updated key-value pair with key K is included in the ingest operation, and if the root node has child nodes, then the ingest operation may also write a tombstone for key K (an update tombstone), following the key assignment method used for overflow compression in the KVS tree, to a new kvset in the appropriate child node of the root node.

In an example, alternatively or in response to processing an updated key-value pair with key K, a key compression or key-value compression operation on a merge set in the root node may write a tombstone for key K (again called an update tombstone), following the key assignment method used for overflow compression in the KVS tree, to a new kvset in the appropriate child node of the root node. In an example, for a given updated key-value pair with key K, at least one corresponding update tombstone is written for key K.
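For illustration, a sketch of tombstone acceleration follows, reusing TOMBSTONE, spill_value, and child_index from the earlier sketches; write_child_kvset is a hypothetical callback that writes a new kvset to the newest position of the numbered child node:

    # Sketch of tombstone acceleration: while compressing a node that has
    # children, forward each surviving tombstone to the child selected by
    # the overflow mapping, so garbage below can be collected sooner.
    def accelerate_tombstones(new_kvset, level, write_child_kvset):
        buckets = {}
        for key, value in new_kvset.items():
            if value is TOMBSTONE:
                child = child_index(spill_value(key), level)
                buckets.setdefault(child, {})[key] = value
        for child, kvset in buckets.items():
            write_child_kvset(child, kvset)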
Although KVS tree prefix operations are discussed below with respect to FIG. 25, the concept can also be used with tombstones. In a prefix operation, a portion of the key (the prefix) is used for matching. In general, the prefix portion of the key is the entirety of what is used to create the overflow value, although a smaller portion could be used, with the tree's deterministic mapping fanning out to all child nodes once the prefix path is consumed. A prefix tombstone uses the power of a prefix that matches multiple keys, so that a single entry represents the deletion of many key-value pairs.

In an example, overflow compression uses a key assignment method based on the overflow value of the first subkey of the key, the first subkey being the key prefix. The prefix tombstone is a logical record comprising a key prefix and indicating that all keys beginning with that prefix, and their associated values (if any), were logically deleted from the KVS tree at a particular point in time. Prefix tombstones are used in the KVS tree for the same purpose as key tombstones, except that a prefix tombstone can logically delete more than one valid key-value pair, whereas a key tombstone logically deletes exactly one. In this example, because overflow compression generates the overflow value of a prefix tombstone from the first subkey value specified by the prefix, every key-value pair, key tombstone, or prefix tombstone with an equivalent first subkey value has an equivalent overflow value, and so takes the same path through the levels of the KVS tree.

In an example, tombstone acceleration can be applied to both prefix tombstones and key tombstones. Prefix tombstones can be treated differently from key tombstones when applying the tombstone acceleration criteria, because a single prefix tombstone can cause the physical removal of a large number of obsolete key-value pairs or tombstones in subsequent garbage collection operations.

The tombstone acceleration techniques discussed above result in a greater number of kvsets being created and thus can be inefficient. Because the application writing the data can know the size of the previously written data, a tombstone can carry the application-reported size of the data it replaces. This information can be used by the system to decide whether to perform the tombstone acceleration discussed above (or to generate an update tombstone).

Some data is immutable. Examples of immutable key-value data include time-series data, log data, sensor data, machine-generated data, and the output of database extract, transform, and load (ETL) processes, among others. In an example, the KVS tree can be configured to store immutable key-value data. In this configuration, the kvsets added to the KVS tree by ingest operations are expected, but not required, to contain no tombstones.

In an example, the KVS tree can be configured to store an amount of immutable data limited only by the capacity of the storage medium containing the KVS tree. In this configuration of the KVS tree, the only garbage collection operation performed is key compression. Here, key compression is performed to increase the efficiency of searching for keys in the KVS tree by reducing the number of kvsets in the root node. Note that without overflow compression, the root node is the only node in the KVS tree. In an example, the compression criteria may include the number of kvsets in the root node, or key search time statistics such as the minimum, maximum, mean, or median search time.
These statistics may be reset on particular events, such as after a key compression, after an ingest operation, when a configured time interval expires, or after a configured number of key searches have been performed. In an example, the merge set for a key compression may include some or all of the kvsets in the root node.

In an example, the KVS tree can be configured to store an amount of immutable data defined by retention criteria, which can be enforced by removing key-value pairs from the KVS tree in a first-in first-out (FIFO) manner. Examples of such retention criteria include: a maximum count of key-value pairs in the KVS tree; a maximum number of bytes of key-value data in the KVS tree; or a maximum age of the key-value pairs in the KVS tree.

In this configuration of the KVS tree, the only garbage collection operation performed is key compression. Here, key compression is performed both to increase the efficiency of searching for keys in the KVS tree (by reducing the number of kvsets in the root node) and to facilitate removing key-value pairs from the KVS tree in a FIFO manner to enforce the retention criteria. In an example, the compression criteria may specify that key compression is performed whenever two or more consecutive kvsets in the root node (comprising the merge set for the key compression) together satisfy a configured fraction of the retention criteria, called the retention increment. Below are some examples:

A) if the retention criterion is W key-value pairs in the KVS tree, and the retention increment is 0.10*W key-value pairs, then key compression is performed whenever two or more consecutive kvsets (the merge set) together contain 0.10*W key-value pairs;

B) if the retention criterion is X bytes of key-value data in the KVS tree, and the retention increment is 0.20*X bytes of key-value data, then key compression is performed whenever two or more consecutive kvsets (the merge set) together contain 0.20*X bytes of key-value data; or

C) if the retention criterion is Y days of key-value data in the KVS tree, and the retention increment is 0.15*Y days of key-value data, then key compression is performed whenever two or more consecutive kvsets (the merge set) together span 0.15*Y days of key-value data.

There may be situations where it is impractical for the merge set of a key compression to satisfy the configured retention increment exactly; thus, in an example, an approximation of the retention increment can be used.

Given a sequence of ingest operations for the KVS tree, each producing a kvset below the configured retention increment, the key compression operations described above generate kvsets in the root node that each meet or approximate the retention increment. An exception to this result may be the newest kvsets, whose combination may still be below the retention increment. Regardless of this possible outcome, the oldest kvset in the KVS tree can be deleted whenever the KVS tree exceeds the retention criteria by at least the retention increment. For example, if the retention criterion is W key-value pairs in the KVS tree and the configured retention increment is 0.10*W key-value pairs, then the kvsets in the root node of the KVS tree will each contain approximately 0.10*W key-value pairs, with the possible exception of the newest kvsets, whose combination may contain fewer than 0.10*W key-value pairs. As a result, the oldest kvset in the KVS tree can be deleted whenever the KVS tree exceeds W key-value pairs by at least 0.10*W key-value pairs (see the sketch below).
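For illustration, a sketch of the count-based FIFO rule follows; modeling each kvset by its key-value pair count is an assumption of the sketch:

    # Sketch of FIFO retention with a count criterion W: drop the oldest
    # kvset whenever the tree overshoots W by at least one retention
    # increment (here 0.10 * W).
    def enforce_retention(kvset_counts, max_pairs, increment_frac=0.10):
        increment = increment_frac * max_pairs
        while sum(kvset_counts) >= max_pairs + increment:
            kvset_counts.pop()             # delete the oldest kvset (last)
        return kvset_counts

    counts = [950, 1000, 1100, 1000, 1050, 600]   # newest -> oldest, W = 5000
    print(enforce_retention(counts, 5000))  # drops the oldest kvset (600)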
The garbage collection accelerators described here—tombstone acceleration, update tombstones, and prefix tombstones—can be applied to key-value stores other than the KVS tree. For example, tombstone acceleration or update tombstones may be applied to LSM tree variants that have one or more garbage collection operations which write key-value data back to the same tree level from which it was read, operating in a manner similar to key compression or key-value compression in the KVS tree. Update tombstones can also be applied to LSM tree variants that permit a tombstone to be ingested into a child of the root node. In another example, prefix tombstones can be used in LSM tree variants having only one node per level (which is common) or implementing a key assignment method that selects a child node based on a portion of the key (e.g., a subkey). In another example, tombstone sizes can be applied to LSM tree variants that use tombstone acceleration. Furthermore, the techniques for optimizing garbage collection of immutable key-value data can be applied to LSM tree variants via garbage collection operations (similar to key compression in the KVS tree) that do not read or write the values in the key-value data.

Implementing these garbage collection facilitators can improve the efficiency of garbage collection in KVS trees and in several other data structures. For example, tombstone acceleration causes tombstones to be written to lower levels of the tree sooner than would otherwise occur, by piggybacking on key compression, key-value compression, and the like, making it possible to eliminate garbage more quickly at all levels of the tree. Tombstone acceleration used in conjunction with key compression and the like achieves these results with far less write-amplification than overflow compression would incur. In other instances, a prefix tombstone allows a single tombstone record to logically delete a large number of related key-value pairs, and update tombstones bring the benefit of tombstone acceleration to updated key-value pairs, which improves the evaluation of the tombstone acceleration criteria. The techniques for optimizing garbage collection of immutable key-value data yield a write-amplification of one (1) for the values in the key-value data.

FIG. 20 illustrates an example of a method 2000 for modifying KVS tree operation, in accordance with an embodiment. The operations of method 2000 are implemented using, for example, electronic hardware (e.g., circuitry) as described throughout this application (including FIG. 26 below). Method 2000 encompasses operations implementing several of the features discussed above with respect to tombstone acceleration, update acceleration (e.g., update tombstones), prefix tombstones, and immutable key-value data in a KVS tree.

At operation 2005, a request for the KVS tree is received. In an example, the request includes a key prefix, a member of the parameter set defines the tombstone in the request as a prefix tombstone, and executing the request on the KVS tree includes writing a kvset containing the prefix tombstone to the KVS tree.
In an example, in KVS tree operations that compare keys, the prefix tombstone matches any key having the same prefix as the prefix tombstone's key prefix.

In an example, the request includes a key, the parameter set includes a member specifying tombstone acceleration, and executing the request on the KVS tree includes writing a tombstone to at least one child node specified by performing the overflow function on the key. The overflow function is a function that takes a key (or a portion of a key) as input and produces an overflow value, as described above with respect to FIG. 13. In an example, the tombstone is written to all existing child nodes specified by performing the overflow function on the key. In an example, the request includes a tombstone. In an example, the request includes a value.

At operation 2010, a parameter set for the KVS tree is received.

At operation 2015, the request is executed on the KVS tree by modifying the operation of the KVS tree according to the parameters.

In an example, the request includes a key, a tombstone, and the storage size of the value corresponding to the key in the KVS tree. Here, the parameter set has a member specifying a garbage collection statistics store, and executing the request on the KVS tree includes storing the key and the storage size in a data structure of the KVS tree. In an example, the tombstone is a prefix tombstone.

In an example, the parameter set includes a member specifying that the KVS tree is immutable, and executing the request on the KVS tree includes writing the request to the root node of the KVS tree. Here, the root node is the only node in the KVS tree when the KVS tree is immutable.

In an example, the KVS tree uses key compression exclusively when the KVS tree is immutable. In an example, method 2000 can be extended to store key search statistics in response to the KVS tree being immutable. In an example, the key search statistic is at least one of a minimum, maximum, mean, or median search time. In an example, the key search statistic is the number of kvsets in the root node.

In an example, when the KVS tree is immutable, method 2000 can be extended to perform key compression in response to the key search statistics satisfying a threshold. In an example, the key compression can include resetting the key search statistics in response to at least one of: a compression, an ingest, a specified number of searches having been performed, or a specified time interval having expired.

In an instance in which a second member of the parameter set specifies that the KVS tree removes elements on a first-in first-out basis, a third member of the parameter set specifies a retention constraint of the KVS tree; the KVS tree performs key compression on kvsets based on the retention constraint, and the KVS tree removes the oldest kvset when the retention constraint is violated. In an example, the retention constraint is a maximum number of key-value pairs. In an example, the retention constraint is a maximum age of the key-value pairs.
In an example, the retention constraint is a maximum storage capacity consumed by the key-value pairs.

In an example, performing key compression on the kvsets based on the retention constraint comprises grouping consecutive kvsets to produce a group set, such that a summed metric over each member of the group set approximates a fraction of the retention constraint, and performing key compression on each member of the group set.

FIG. 21 is a block diagram illustrating a key search, in accordance with an embodiment. The search progresses by starting at the newest kvset in the root node and moving through progressively older kvsets until the key is found or the oldest kvset in a leaf node has been checked without finding the key. Because of the deterministic parent-to-child key mapping, only one leaf is ever searched, and the oldest kvset in that leaf holds the oldest key entry on the search path. Thus, if this search path is exhausted and the key is not found, the key is not in the KVS tree.

The search stops as soon as the newest key entry for the key is found, because the search moves from the newest kvsets to the oldest. This behavior allows the immutability of kvsets to be maintained without requiring the immediate removal of obsolete key-value pairs from the KVS tree. Instead, a newer value, or a tombstone indicating deletion, is placed in a newer kvset and is found first, producing an accurate response to the query regardless of the older versions of the key-value pair still resident in the KVS tree.

In an example, the search for a key K proceeds by setting the current node to the root node. If a key-value pair or tombstone with key K is found in the current node, the search is complete, and the associated value or a "key not found" indication, respectively, is returned as the result. If key K is not found, the current node is set to the child of the node determined by key K and the key assignment method used for overflow compression. If no such child exists, the search is complete and a "key not found" indication is the result. Otherwise, the kvsets of the new current node are searched for key K, and the process repeats. Conceptually, a search for key K in a KVS tree follows the same path through the KVS tree that every key-value pair or tombstone with key K takes as a result of overflow compression.

Because of the deterministic, key-based mapping between parent and child nodes, at most one node per level of the KVS tree is searched, until either a key-value pair or tombstone with key K is found or the deepest level of the KVS tree has been searched (e.g., at most a number of nodes equal to the number of levels). The search is therefore highly efficient.
FIG. 22 illustrates an example of a method 2200 for performing a key search, in accordance with an embodiment. The operations of method 2200 are implemented with, for example, electronic hardware (e.g., circuitry) as described throughout this application (including FIG. 26 below).

At operation 2205, a search request containing a key is received.

At operation 2210, the root node is selected as the current node.

At operation 2215, the current node is examined.

At operation 2220, the examination begins with a query to the newest kvset of the current node.

At decision 2225, if the key is not found, method 2200 proceeds to decision 2240; otherwise, if the key is found, method 2200 proceeds to decision 2230.

At decision 2230, if the key entry corresponding to the key contains or references a tombstone, method 2200 proceeds to result 2260; otherwise it proceeds to result 2235.

At result 2235, the search request is answered by returning the value corresponding to the newest key entry for the key.

At decision 2240, if there are more kvsets in the current node, method 2200 proceeds to operation 2245; otherwise it proceeds to decision 2250.

At operation 2245, method 2200 selects the next-newest kvset in the current node to query for the key and proceeds to decision 2225.

At decision 2250, if the current node does not have a child node matching the key's overflow function, method 2200 proceeds to result 2260; otherwise it proceeds to operation 2255.

At operation 2255, the child node matching the overflow function of the key is set as the current node, and method 2200 proceeds to operation 2215.

At result 2260, a negative indication of the search, such as "key not found," is returned in response to the search request.

A scan operation differs from a search in that multiple keys are sought. A typical scan operation can include a search over a range of keys, where the request specifies a plurality of keys that define the range. In general, criteria are specified and every key in the KVS tree that satisfies the criteria is expected in the result.

FIG. 23 is a block diagram illustrating a key scan, in accordance with an embodiment. A key scan, or pure scan, identifies every kvset in every node of the KVS tree that contains a key entry satisfying the scan criteria (e.g., falling within a specified range). While kvset key storage allows efficient searching for a specific key, ensuring that every key meeting the scan criteria is found requires examining every kvset. However, due to the sorted nature of the key store in a kvset, the scan can make this determination quickly without inspecting every key. This is still better than what a WB tree offers, for example, because a WB tree does not store key-value pairs in a key-sorted structure, but instead keeps keys in a form that resolves key hash collisions; every key in a WB tree must therefore be read to satisfy a scan.

In the KVS tree, to facilitate scanning, keys are stored in a kvset in key-sorted order. Thus, a given key can be located in logarithmic time, and the keys within a range (e.g., the highest and lowest keys in the range) can also be determined quickly. Moreover, the example kvset metadata discussed above with respect to FIGS. 1 through 5 can further speed up scanning. For example, if a kvset maintains the minimum and maximum key values it contains, the scan can quickly determine that no key in that kvset meets the specified range. Similarly, a Bloom filter over the kvset's keys can be used to quickly determine that a particular key is not in the key store of a given kvset.

In an example (not illustrated), the scan can otherwise proceed much like the search, except that every node is accessed.
Thus, the scan reads, from the kvsets, the newest record of each key that satisfies the criteria, where the newest record for a given key K can be either a key-value pair or a key tombstone. As noted above, within a given node of the KVS tree, kvsets are ordered from newest to oldest, and the kvsets in a node at level (L+1) are older than the kvsets in a node at level L. After the keys satisfying the criteria are found, they are passed back to the requester in a result set.

The search-like scan described directly above can be improved upon by recognizing that every kvset in every node is visited during the scan. Thus, in an example, the kvsets can be read simultaneously. Simultaneously reading all kvsets can require very large buffers (e.g., storage for the returned results). However, this can be mitigated by quickly determining whether a given kvset can possibly satisfy the scan criteria (e.g., contains keys within the range). Thus, every kvset can be visited, but only kvsets with keys that satisfy the criteria are read. This example is illustrated in FIG. 23: the reader simultaneously visits all of the kvsets but reads only a subset of them (the dashed kvsets in the figure). This technique supports iterator-style semantics, in which a program can ask for the next or previous key. The sorted nature of the keys in the kvsets permits rapid identification of the next key and, when there is a conflict on a key (e.g., multiple entries for the same key), of which value is the newest and will be passed back to the program; unless the newest value is a tombstone, in which case the iterator should skip the key and provide the newest value for the next key.

In an example, scanning can include receiving a scan request that includes a range of keys (or other criteria). The scan proceeds by collecting, into a found set, the keys specified by the range from each kvset of a set of nodes of the tree. In an example, the set of nodes includes every node in the tree. The scan continues by reducing the found set to a result set, keeping the key-value pair corresponding to the newest entry of each key that is not deleted by a tombstone. The scan completes by passing back the result set.

FIG. 24 is a block diagram illustrating a key scan, in accordance with an embodiment, from a different perspective than FIG. 23. The scan criterion is keys between A and K, inclusive. The scan begins with the newest kvset of the root node, kvset 12, which is the newest kvset in the KVS tree. In an example, the key metrics of kvset 12 allow a rapid determination that at least some of its keys satisfy the criterion; specifically, in this example, keys A and B. The scan proceeds from the top (root) to the bottom (leaves) of the KVS tree, and from the newest kvset to the oldest kvset within each node. Note that keys A, B, C, E, and K appear in multiple kvsets across multiple nodes. The scan keeps only the newest entry for each key (e.g., the selected keys). Thus, the result set will contain the values found in kvset 12 for keys A and B, in kvset 11 for key C, in kvset 10 for key E, and in kvset 6 for key K. However, if the key entry in these kvsets for any of these keys contains or references a tombstone, that key is omitted from the result set. The key D is unique to kvset 5, so its value is included in the result set (assuming the key D does not reference a tombstone). A sketch of such a range scan follows.
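The pure range scan described above can be sketched as follows, reusing the conventions of the earlier sketches and additionally assuming each kvset carries `min_key` and `max_key` attributes; the skip test mirrors the metadata-based pruning described in the text. All names are illustrative assumptions.

```python
def range_scan(root, lo, hi):
    """Visit every node; read only kvsets whose [min_key, max_key] metadata
    overlaps [lo, hi]; keep the newest entry per key; drop tombstoned keys."""
    found = {}
    stack = [root]
    while stack:
        node = stack.pop()
        for kvset in node.kvsets:  # newest -> oldest within the node
            if kvset.max_key < lo or kvset.min_key > hi:
                continue  # metadata proves no key falls in range; skip the read
            for k, v in kvset.pairs.items():
                if lo <= k <= hi:
                    # Parents are visited before their children and kvsets are
                    # ordered newest-first, so the first entry recorded for a
                    # key is the newest one on that key's overflow path.
                    found.setdefault(k, v)
        stack.extend(node.children.values())
    # Reduce the found set to the result set: drop tombstoned keys.
    return {k: v for k, v in found.items() if v is not TOMBSTONE}
```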
FIG. 25 is a block diagram illustrating a prefix scan, in accordance with an embodiment. A prefix scan locates all key-value pairs (if any) in the KVS tree whose keys begin with a specified prefix. Although the prefix is shorter than an entire key, and thus can match multiple keys, the prefix portion of the key is at least as large as the portion of the key used by the overflow function to create the overflow value. Thus, if the overflow function uses the first subkey of a key, the prefix contains the first subkey (and can contain additional subkeys). This requirement allows the deterministic mapping to make prefix-scan performance better than pure-scan performance, because only the nodes on the path of the prefix are accessed.

In an example, the overflow value is based on the first subkey of the key. In this example, the prefix is specified to contain the value of the first subkey of the key. In this example, the prefix scan can proceed by identifying, in each node of the KVS tree on the prefix's path, every kvset containing a key-value pair or tombstone whose key starts with the specified prefix. In contrast to a pure scan, the prefix scan does not access every node of the KVS tree. More specifically, the nodes examined can be limited to those along the path determined by the overflow value of the first subkey value defining the prefix. In an example, the last subkey, rather than the first, can be used for the overflow value; in this example, the prefix is specified to contain the value of the last subkey of the key. Additional scan variants can be implemented based on the particular subkeys used in the overflow-value computation.

Again, as with a pure scan, there are multiple ways to retrieve the keys or key-value pairs during the scan. In an example, as illustrated, the nodes along the overflow-value path given by the prefix are accessed simultaneously, the kvsets within those nodes are tested for keys satisfying the scan criteria, and the kvsets that pass are read.

Prefix scans are extremely efficient, both because the number of nodes examined is limited to one per level of the KVS tree and because the keys in a kvset's key store are typically kept in a structure in which keys matching a prefix are readily identified. Additionally, the kvset metrics discussed above with respect to key scans can also help speed the search.

The prefix scan can include receiving a scan request with a key prefix. Here, the set of nodes to be searched includes each node corresponding to the key prefix. In an example, the correspondence of a node to the key prefix is determined by a portion of the overflow value derived from the key prefix, the portion of the overflow value being determined by the tree level of the given node. The prefix scan proceeds by collecting, into a found set, the keys specified by the prefix from each kvset of the set of nodes. The prefix scan continues by reducing the found set to a result set, keeping the key-value pair corresponding to the newest entry of each key that is not a tombstone and is not deleted by a more recent tombstone. The prefix scan completes by passing back the result set. A sketch follows.
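The prefix scan can be sketched by replacing the whole-tree walk of the range scan with a single overflow-value path, again under the earlier illustrative conventions and on the stated assumption that the prefix covers the first subkey used by the hypothetical `overflow_value` helper.

```python
def prefix_scan(root, prefix):
    """Collect the newest non-tombstone entry for each key starting with
    `prefix`, visiting only the single path of nodes selected by the overflow
    value of the prefix's first subkey."""
    found = {}
    node, level = root, 0
    while node is not None:
        for kvset in node.kvsets:  # newest -> oldest
            for k, v in kvset.pairs.items():
                if k.startswith(prefix):
                    found.setdefault(k, v)  # keep only the newest entry
        # Descend along the single path given by the prefix; this works because
        # the prefix contains the full first subkey that overflow_value hashes.
        node = node.children.get(overflow_value(prefix, level))
        level += 1
    return {k: v for k, v in found.items() if v is not TOMBSTONE}
```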
As described above, the KVS tree provides a powerful structure for storing key-value data on disk. The KVS tree has many of the advantages of LSM trees and WB trees without their disadvantages. For example, with regard to the storage space or write amplification due to compression, in a KVS tree the size of nodes can easily be controlled to limit the maximum amount of temporary storage capacity used for compression. In addition, key compression can be used to increase search efficiency in a node without reading and writing value blocks, thereby reducing the read amplification and write amplification caused by compression. In a conventional LSM tree, the amount of temporary storage capacity required for compression, and the amount of read and write amplification, can be proportional to the key-value capacity of the tree level being compressed; this is exacerbated by the fact that the per-level key-value capacity of an LSM tree is typically configured to grow exponentially at each deeper tree level.

Regarding key-search efficiency, in the KVS tree, searching for a key K involves searching only one node per tree level, which represents only a small fraction of the total keys in the KVS tree. In a traditional LSM tree, searching for the key K requires searching all the keys in each level.

Regarding prefix-scan efficiency, as described above, an instance of the KVS tree permits all keys starting with a specified prefix to be found by searching only one node per tree level, again only a small fraction of the total keys in the KVS tree. In a traditional LSM tree, finding all the keys starting with a specified prefix requires searching all the keys in each level.

Regarding scan efficiency, the examples of the KVS tree described above permit all keys in a given range, or starting with a specified prefix, to be found by exploiting the data in the kvsets. In a WB tree, the keys are unordered, so there is no efficient way to implement either of these operations; each entry of a WB tree must be retrieved and tested to perform these scans.

Regarding compression performance, in the KVS tree, the key, key-value, and overflow compression maintenance techniques (except for boost compression) are non-blocking due to the sorted nature of the kvsets in a node. Thus, a new kvset can be added to a node while key, key-value, or overflow compression is performed on that node, simply by placing the new kvset in the newest position. In the WB tree, compression is a blocking operation.

FIG. 26 illustrates a block diagram of an example machine 2600 upon which any one or more of the techniques (e.g., methods) discussed herein may be performed. In alternative embodiments, the machine 2600 can operate as a standalone device or can be connected (e.g., networked) to other machines. In a networked deployment, the machine 2600 can operate as a server machine, a client machine, or both in server-client network environments. In an example, the machine 2600 can act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 2600 can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch, or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), and other computer-cluster configurations.

Examples, as described herein, may include, or may operate by, logic or a number of components or mechanisms. Circuitry is a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership can be flexible over time. Circuitries include members that can, alone or in combination, perform specified operations when operating. In an example, the hardware of the circuitry can be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry can include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.), including a computer-readable medium physically modified (e.g., by magnetic, electrical, or movable placement of invariant-mass particles) to encode instructions for the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent change, for example, from an insulator to a conductor, or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections, to carry out portions of the specific operation when in operation. Accordingly, the computer-readable medium is communicatively coupled to the other components of the circuitry while the device is operating. In an example, any of the physical components can be used in more than one member of more than one circuitry. For example, under operation, an execution unit can be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry, at a different time.

A machine (e.g., computer system) 2600 can include a hardware processor 2602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 2604, and a static memory 2606, some or all of which can communicate with one another via an interlink (e.g., bus) 2608. The machine 2600 can further include a display unit 2610, an alphanumeric input device 2612 (e.g., a keyboard), and a user interface (UI) navigation device 2614 (e.g., a mouse). In an example, the display unit 2610, the input device 2612, and the UI navigation device 2614 can be a touch-screen display. The machine 2600 can additionally include a storage device (e.g., drive unit) 2616, a signal generation device 2618 (e.g., a speaker), a network interface device 2620, and one or more sensors 2621, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 2600 can include an output controller 2628, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to
communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

The storage device 2616 can include a machine-readable medium 2622 on which are stored one or more sets of data structures or instructions 2624 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 2624 can also reside, completely or at least partially, within the main memory 2604, within the static memory 2606, or within the hardware processor 2602 during execution thereof by the machine 2600. In an example, one or any combination of the hardware processor 2602, the main memory 2604, the static memory 2606, or the storage device 2616 can constitute machine-readable media.

While the machine-readable medium 2622 is illustrated as a single medium, the term "machine-readable medium" can include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 2624.

The term "machine-readable medium" can include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 2600 and that causes the machine 2600 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions. Non-limiting machine-readable-medium examples can include solid-state memories and optical and magnetic media. In an example, a massed machine-readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass; accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine-readable media can include non-volatile memory, such as semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

The instructions 2624 can further be transmitted or received over a communications network 2626 using a transmission medium via the network interface device 2620, utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks can include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), plain old telephone (POTS) networks, wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards, the IEEE 802.16 family of standards, the IEEE 802.15.4 family of standards), peer-to-peer (P2P) networks, and others. In an example, the network interface device 2620 can include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 2626. In an example, the network interface device 2620 can include a plurality of antennas to communicate wirelessly using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
The term "transmission medium" shall be taken to include any intangible medium capable of storing, encoding or carrying instructions for execution by machine 2600, and including digital or analog communication signals or other intangible media to facilitate communication of the software.Additional instructions and examplesExample 1 is a system comprising processing circuitry configured to: receive a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS corresponding to data in the write request a tree range; assigning a stream identifier (ID) to the write request based on the KVS tree range and a stability value of the write request; and returning the stream ID to manage the write request Flow assignment that modifies a write operation of the multi-stream storage device.In the example 2, the subject matter of example 1, wherein the KVS tree range comprises at least one of: a kvset ID corresponding to a kvset of the data; and a KVS tree corresponding to the data a node ID corresponding to the node; a hierarchy ID corresponding to a tree hierarchy corresponding to the data; a tree ID of the KVS tree; a forest ID corresponding to a forest to which the KVS tree belongs; or a type corresponding to the data .In the example 3, the subject matter of example 2, wherein the type is a key block type or a value block type.In the example 4, the subject matter of any one or more of examples 1 to 3, wherein the notification comprises a device ID of the multi-stream device.In the example 5, the subject matter of example 4, wherein the notification comprises a last write request corresponding to writing a kvset identified by the kvset ID to a write request sequence of the multi-stream storage device WLAST logo.The subject matter of any one or more of examples 1 to 5, wherein the processing circuit is configured to assign the stability value based on the KVS tree range.In the example 7, the subject matter of example 6, wherein the stability value is one of a predefined set of stability values.In the example 8, the subject matter of example 7, wherein the predefined set of stability values comprises HOT, WARM, and COLD, wherein HOT indicates a minimum expected life of the data on the multi-stream storage device and COLD Indicates the highest expected life of the data on the multi-stream storage device.The subject matter of any one or more of examples 6 to 8, wherein to assign the stability value, the processing circuit is configured to use a portion of the KVS tree range from a data structure Position the stability value.In the example 10, the subject matter of example 9, wherein the portion of the KVS tree range comprises a tree ID of the data.In the example 11, the subject matter of example 10, wherein the portion of the KVS tree range comprises a hierarchical ID of the data.In the example 12, the subject matter of example 11, wherein the portion of the KVS tree range comprises a node ID of the data.In the example 13, the subject matter of any one or more of examples 9 to 12, wherein the portion of the KVS tree range comprises a hierarchical ID of the data.In the example 14, the subject matter of any one or more of examples 9 to 13, wherein the portion of the KVS tree range comprises a type of the data.The subject matter of any one or more of examples 6 to 14, wherein to assign the stability value, the processing circuit is configured to maintain a frequency of stability value assignments for a level ID a set, each member of the set of frequencies 
In Example 15, the subject matter of any one or more of Examples 6 to 14, wherein, to assign the stability value, the processing circuitry is configured to: maintain a set of frequencies of stability-value assignments for level IDs, each member of the set of frequencies corresponding to a unique level ID; retrieve, from the set of frequencies, a frequency corresponding to the level ID in the KVS tree range; and select the stability value from a mapping of stability values to frequency ranges based on the frequency.

In Example 16, the subject matter of any one or more of Examples 1 to 15, wherein, to assign the stream ID to the write request based on the KVS tree range and the stability value of the write request, the processing circuitry is configured to: create a stream range value from the KVS tree range; perform a lookup in a selected stream data structure using the stream range value; and return, from the selected stream data structure, the stream ID corresponding to the stream range.

In Example 17, the subject matter of Example 16, wherein, to perform the lookup in the selected stream data structure, the processing circuitry is configured to: fail to find the stream range value in the selected stream data structure; perform a lookup in an available stream data structure using the stability value; receive a result of the lookup including the stream ID; and add an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 18, the subject matter of Example 17, wherein multiple entries of the available stream data structure correspond to the stability value, and wherein the result of the lookup is at least one of a round-robin or random selection of an entry from the multiple entries.

In Example 19, the subject matter of any one or more of Examples 17 to 18, wherein the processing circuitry is further configured to initialize the available stream data structure, the initialization including the processing circuitry being configured to: obtain a number of streams obtainable from the multi-stream storage device; obtain stream IDs for all streams obtainable from the multi-stream storage device, each stream ID being unique; add the stream IDs to stability-value groups; and create, in the available stream data structure, a record for each stream ID, the record including the stream ID, a device ID of the multi-stream storage device, and the stability value corresponding to the stability-value group of the stream ID.

In Example 20, the subject matter of any one or more of Examples 16 to 19, wherein the stream range value includes the tree ID of the data.

In Example 21, the subject matter of Example 20, wherein the stream range value includes the level ID of the data.

In Example 22, the subject matter of Example 21, wherein the stream range value includes the node ID of the data.

In Example 23, the subject matter of any one or more of Examples 20 to 22, wherein the stream range value includes the kvset ID of the data.

In Example 24, the subject matter of any one or more of Examples 16 to 23, wherein the stream range value includes the level ID of the data.

In Example 25, the subject matter of any one or more of Examples 16 to 24, wherein, to perform the lookup in the selected stream data structure, the processing circuitry is configured to: fail to find the stream range value in the selected stream data structure; locate the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure; and create an entry in the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.
In Example 26, the subject matter of Example 25, wherein, to locate the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure, the processing circuitry is configured to: compare a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries equals the number of second entries; locate, from the selected stream data structure, a group of entries corresponding to the stability value; and return the stream ID of the entry in the group of entries having the oldest timestamp.

In Example 27, the subject matter of any one or more of Examples 25 to 26, wherein, to locate the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure, the processing circuitry is configured to: compare a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries does not equal the number of second entries; perform a lookup in the available stream data structure using the stability value and the stream IDs in the entries of the selected stream data structure; receive a result of the lookup including a stream ID that is not included in the entries of the selected stream data structure; and add an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 28, the subject matter of any one or more of Examples 16 to 27, wherein, to return the stream ID corresponding to the stream range from the selected stream data structure, the processing circuitry is configured to update the timestamp of the entry in the selected stream data structure corresponding to the stream ID.

In Example 29, the subject matter of any one or more of Examples 16 to 28, wherein the write request includes a WLAST flag, and wherein, to return the stream ID corresponding to the stream range from the selected stream data structure, the processing circuitry is configured to remove the entry corresponding to the stream ID from the selected stream data structure.

In Example 30, the subject matter of any one or more of Examples 16 to 29, wherein the processing circuitry is further configured to remove, from the selected stream data structure, an entry having a timestamp that exceeds a threshold.
Example 31 is at least one machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising: receiving a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS tree range corresponding to data in the write request; assigning a stream identifier (ID) to the write request based on the KVS tree range and a stability value of the write request; and returning the stream ID to govern a stream assignment for the write request, the stream assignment modifying a write operation of the multi-stream storage device.

In Example 32, the subject matter of Example 31, wherein the KVS tree range includes at least one of: a kvset ID corresponding to a kvset of the data; a node ID corresponding to a node of the KVS tree corresponding to the data; a level ID corresponding to a tree level corresponding to the data; a tree ID of the KVS tree; a forest ID corresponding to a forest to which the KVS tree belongs; or a type corresponding to the data.

In Example 33, the subject matter of Example 32, wherein the type is a key-block type or a value-block type.

In Example 34, the subject matter of any one or more of Examples 31 to 33, wherein the notification includes a device ID of the multi-stream storage device.

In Example 35, the subject matter of Example 34, wherein the notification includes a WLAST flag corresponding to a last write request in a sequence of write requests that write a kvset, identified by the kvset ID, to the multi-stream storage device.

In Example 36, the subject matter of any one or more of Examples 31 to 35, wherein the operations comprise assigning the stability value based on the KVS tree range.

In Example 37, the subject matter of Example 36, wherein the stability value is one of a predefined set of stability values.

In Example 38, the subject matter of Example 37, wherein the predefined set of stability values includes HOT, WARM, and COLD, wherein HOT indicates the lowest expected lifetime of the data on the multi-stream storage device and COLD indicates the highest expected lifetime of the data on the multi-stream storage device.

In Example 39, the subject matter of any one or more of Examples 36 to 38, wherein assigning the stability value includes locating the stability value from a data structure using a portion of the KVS tree range.

In Example 40, the subject matter of Example 39, wherein the portion of the KVS tree range includes the tree ID of the data.

In Example 41, the subject matter of Example 40, wherein the portion of the KVS tree range includes the level ID of the data.

In Example 42, the subject matter of Example 41, wherein the portion of the KVS tree range includes the node ID of the data.

In Example 43, the subject matter of any one or more of Examples 39 to 42, wherein the portion of the KVS tree range includes the level ID of the data.

In Example 44, the subject matter of any one or more of Examples 39 to 43, wherein the portion of the KVS tree range includes the type of the data.

In Example 45, the subject matter of any one or more of Examples 36 to 44, wherein assigning the stability value includes: maintaining a set of frequencies of stability-value assignments for level IDs, each member of the set of frequencies corresponding to a unique level ID; retrieving, from the set of frequencies, a frequency corresponding to the level ID in the KVS tree range; and selecting the stability value from a mapping of stability values to frequency ranges based on the frequency.
In Example 46, the subject matter of any one or more of Examples 31 to 45, wherein assigning the stream ID to the write request based on the KVS tree range and the stability value of the write request includes: creating a stream range value from the KVS tree range; performing a lookup in a selected stream data structure using the stream range value; and returning, from the selected stream data structure, the stream ID corresponding to the stream range.

In Example 47, the subject matter of Example 46, wherein performing the lookup in the selected stream data structure includes: failing to find the stream range value in the selected stream data structure; performing a lookup in an available stream data structure using the stability value; receiving a result of the lookup including the stream ID; and adding an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 48, the subject matter of Example 47, wherein multiple entries of the available stream data structure correspond to the stability value, and wherein the result of the lookup is at least one of a round-robin or random selection of an entry from the multiple entries.

In Example 49, the subject matter of any one or more of Examples 47 to 48, wherein the operations comprise initializing the available stream data structure by: obtaining a number of streams obtainable from the multi-stream storage device; obtaining stream IDs for all streams obtainable from the multi-stream storage device, each stream ID being unique; adding the stream IDs to stability-value groups; and creating, in the available stream data structure, a record for each stream ID, the record including the stream ID, a device ID of the multi-stream storage device, and the stability value corresponding to the stability-value group of the stream ID.

In Example 50, the subject matter of any one or more of Examples 46 to 49, wherein the stream range value includes the tree ID of the data.

In Example 51, the subject matter of Example 50, wherein the stream range value includes the level ID of the data.

In Example 52, the subject matter of Example 51, wherein the stream range value includes the node ID of the data.

In Example 53, the subject matter of any one or more of Examples 50 to 52, wherein the stream range value includes the kvset ID of the data.

In Example 54, the subject matter of any one or more of Examples 46 to 53, wherein the stream range value includes the level ID of the data.

In Example 55, the subject matter of any one or more of Examples 46 to 54, wherein performing the lookup in the selected stream data structure includes: failing to find the stream range value in the selected stream data structure; locating the stream ID from the selected stream data structure or an available stream data structure based on the contents of the selected stream data structure; and creating an entry in the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.
In Example 56, the subject matter of Example 55, wherein locating the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure includes: comparing a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries equals the number of second entries; locating, from the selected stream data structure, a group of entries corresponding to the stability value; and returning the stream ID of the entry in the group of entries having the oldest timestamp.

In Example 57, the subject matter of any one or more of Examples 55 to 56, wherein locating the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure includes: comparing a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries does not equal the number of second entries; performing a lookup in the available stream data structure using the stability value and the stream IDs in the entries of the selected stream data structure; receiving a result of the lookup including a stream ID that is not included in the entries of the selected stream data structure; and adding an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 58, the subject matter of any one or more of Examples 46 to 57, wherein returning the stream ID corresponding to the stream range from the selected stream data structure includes updating the timestamp of the entry in the selected stream data structure corresponding to the stream ID.

In Example 59, the subject matter of any one or more of Examples 46 to 58, wherein the write request includes a WLAST flag, and wherein returning the stream ID corresponding to the stream range from the selected stream data structure includes removing the entry corresponding to the stream ID from the selected stream data structure.

In Example 60, the subject matter of any one or more of Examples 46 to 59, wherein the operations comprise removing, from the selected stream data structure, an entry having a timestamp that exceeds a threshold.

Example 61 is a machine-implemented method, comprising: receiving a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS tree range corresponding to data in the write request; assigning a stream identifier (ID) to the write request based on the KVS tree range and a stability value of the write request; and returning the stream ID to govern a stream assignment for the write request, the stream assignment modifying a write operation of the multi-stream storage device.

In Example 62, the subject matter of Example 61, wherein the KVS tree range includes at least one of: a kvset ID corresponding to a kvset of the data; a node ID corresponding to a node of the KVS tree corresponding to the data; a level ID corresponding to a tree level corresponding to the data; a tree ID of the KVS tree; a forest ID corresponding to a forest to which the KVS tree belongs; or a type corresponding to the data.

In Example 63, the subject matter of Example 62, wherein the type is a key-block type or a value-block type.

In Example 64, the subject matter of any one or more of Examples 61 to 63, wherein the notification includes a device ID of the multi-stream storage device.

In Example 65, the subject matter of Example 64, wherein the notification includes a WLAST flag corresponding to a last write request in a sequence of write requests that write a kvset, identified by the kvset ID, to the multi-stream storage device.
In Example 66, the subject matter of any one or more of Examples 61 to 65 optionally includes assigning the stability value based on the KVS tree range.

In Example 67, the subject matter of Example 66, wherein the stability value is one of a predefined set of stability values.

In Example 68, the subject matter of Example 67, wherein the predefined set of stability values includes HOT, WARM, and COLD, wherein HOT indicates the lowest expected lifetime of the data on the multi-stream storage device and COLD indicates the highest expected lifetime of the data on the multi-stream storage device.

In Example 69, the subject matter of any one or more of Examples 66 to 68, wherein assigning the stability value includes locating the stability value from a data structure using a portion of the KVS tree range.

In Example 70, the subject matter of Example 69, wherein the portion of the KVS tree range includes the tree ID of the data.

In Example 71, the subject matter of Example 70, wherein the portion of the KVS tree range includes the level ID of the data.

In Example 72, the subject matter of Example 71, wherein the portion of the KVS tree range includes the node ID of the data.

In Example 73, the subject matter of any one or more of Examples 69 to 72, wherein the portion of the KVS tree range includes the level ID of the data.

In Example 74, the subject matter of any one or more of Examples 69 to 73, wherein the portion of the KVS tree range includes the type of the data.

In Example 75, the subject matter of any one or more of Examples 66 to 74, wherein assigning the stability value includes: maintaining a set of frequencies of stability-value assignments for level IDs, each member of the set of frequencies corresponding to a unique level ID; retrieving, from the set of frequencies, a frequency corresponding to the level ID in the KVS tree range; and selecting the stability value from a mapping of stability values to frequency ranges based on the frequency.

In Example 76, the subject matter of any one or more of Examples 61 to 75, wherein assigning the stream ID to the write request based on the KVS tree range and the stability value of the write request includes: creating a stream range value from the KVS tree range; performing a lookup in a selected stream data structure using the stream range value; and returning, from the selected stream data structure, the stream ID corresponding to the stream range.

In Example 77, the subject matter of Example 76, wherein performing the lookup in the selected stream data structure includes: failing to find the stream range value in the selected stream data structure; performing a lookup in an available stream data structure using the stability value; receiving a result of the lookup including the stream ID; and adding an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 78, the subject matter of Example 77, wherein multiple entries of the available stream data structure correspond to the stability value, and wherein the result of the lookup is at least one of a round-robin or random selection of an entry from the multiple entries.
In Example 79, the subject matter of any one or more of Examples 77 to 78 optionally includes initializing the available stream data structure by: obtaining a number of streams obtainable from the multi-stream storage device; obtaining stream IDs for all streams obtainable from the multi-stream storage device, each stream ID being unique; adding the stream IDs to stability-value groups; and creating, in the available stream data structure, a record for each stream ID, the record including the stream ID, a device ID of the multi-stream storage device, and the stability value corresponding to the stability-value group of the stream ID.

In Example 80, the subject matter of any one or more of Examples 76 to 79, wherein the stream range value includes the tree ID of the data.

In Example 81, the subject matter of Example 80, wherein the stream range value includes the level ID of the data.

In Example 82, the subject matter of Example 81, wherein the stream range value includes the node ID of the data.

In Example 83, the subject matter of any one or more of Examples 80 to 82, wherein the stream range value includes the kvset ID of the data.

In Example 84, the subject matter of any one or more of Examples 76 to 83, wherein the stream range value includes the level ID of the data.

In Example 85, the subject matter of any one or more of Examples 76 to 84, wherein performing the lookup in the selected stream data structure includes: failing to find the stream range value in the selected stream data structure; locating the stream ID from the selected stream data structure or an available stream data structure based on the contents of the selected stream data structure; and creating an entry in the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 86, the subject matter of Example 85, wherein locating the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure includes: comparing a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries equals the number of second entries; locating, from the selected stream data structure, a group of entries corresponding to the stability value; and returning the stream ID of the entry in the group of entries having the oldest timestamp.

In Example 87, the subject matter of any one or more of Examples 85 to 86, wherein locating the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure includes: comparing a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries does not equal the number of second entries; performing a lookup in the available stream data structure using the stability value and the stream IDs in the entries of the selected stream data structure; receiving a result of the lookup including a stream ID that is not included in the entries of the selected stream data structure; and adding an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 88, the subject matter of any one or more of Examples 76 to 87, wherein returning the stream ID corresponding to the stream range from the selected stream data structure includes updating the timestamp of the entry in the selected stream data structure corresponding to the stream ID.
In Example 89, the subject matter of any one or more of Examples 76 to 88, wherein the write request includes a WLAST flag, and wherein returning the stream ID corresponding to the stream range from the selected stream data structure includes removing the entry corresponding to the stream ID from the selected stream data structure.

In Example 90, the subject matter of any one or more of Examples 76 to 89 optionally includes removing, from the selected stream data structure, an entry having a timestamp that exceeds a threshold.

Example 91 is a system comprising: means for receiving a notification of a KVS tree write request to a multi-stream storage device, the notification including a KVS tree range corresponding to data in the write request; means for assigning a stream identifier (ID) to the write request based on the KVS tree range and a stability value of the write request; and means for returning the stream ID to govern a stream assignment for the write request, the stream assignment modifying a write operation of the multi-stream storage device.

In Example 92, the subject matter of Example 91, wherein the KVS tree range includes at least one of: a kvset ID corresponding to a kvset of the data; a node ID corresponding to a node of the KVS tree corresponding to the data; a level ID corresponding to a tree level corresponding to the data; a tree ID of the KVS tree; a forest ID corresponding to a forest to which the KVS tree belongs; or a type corresponding to the data.

In Example 93, the subject matter of Example 92, wherein the type is a key-block type or a value-block type.

In Example 94, the subject matter of any one or more of Examples 91 to 93, wherein the notification includes a device ID of the multi-stream storage device.

In Example 95, the subject matter of Example 94, wherein the notification includes a WLAST flag corresponding to a last write request in a sequence of write requests that write a kvset, identified by the kvset ID, to the multi-stream storage device.

In Example 96, the subject matter of any one or more of Examples 91 to 95 optionally includes means for assigning the stability value based on the KVS tree range.

In Example 97, the subject matter of Example 96, wherein the stability value is one of a predefined set of stability values.

In Example 98, the subject matter of Example 97, wherein the predefined set of stability values includes HOT, WARM, and COLD, wherein HOT indicates the lowest expected lifetime of the data on the multi-stream storage device and COLD indicates the highest expected lifetime of the data on the multi-stream storage device.

In Example 99, the subject matter of any one or more of Examples 96 to 98, wherein assigning the stability value includes locating the stability value from a data structure using a portion of the KVS tree range.

In Example 100, the subject matter of Example 99, wherein the portion of the KVS tree range includes the tree ID of the data.

In Example 101, the subject matter of Example 100, wherein the portion of the KVS tree range includes the level ID of the data.

In Example 102, the subject matter of Example 101, wherein the portion of the KVS tree range includes the node ID of the data.

In Example 103, the subject matter of any one or more of Examples 99 to 102, wherein the portion of the KVS tree range includes the level ID of the data.

In Example 104, the subject matter of any one or more of Examples 99 to 103, wherein the portion of the KVS tree range includes the type of the data.
In Example 105, the subject matter of any one or more of Examples 96 to 104, wherein assigning the stability value includes: maintaining a set of frequencies of stability-value assignments for level IDs, each member of the set of frequencies corresponding to a unique level ID; retrieving, from the set of frequencies, a frequency corresponding to the level ID in the KVS tree range; and selecting the stability value from a mapping of stability values to frequency ranges based on the frequency.

In Example 106, the subject matter of any one or more of Examples 91 to 105, wherein assigning the stream ID to the write request based on the KVS tree range and the stability value of the write request includes: creating a stream range value from the KVS tree range; performing a lookup in a selected stream data structure using the stream range value; and returning, from the selected stream data structure, the stream ID corresponding to the stream range.

In Example 107, the subject matter of Example 106, wherein performing the lookup in the selected stream data structure includes: failing to find the stream range value in the selected stream data structure; performing a lookup in an available stream data structure using the stability value; receiving a result of the lookup including the stream ID; and adding an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 108, the subject matter of Example 107, wherein multiple entries of the available stream data structure correspond to the stability value, and wherein the result of the lookup is at least one of a round-robin or random selection of an entry from the multiple entries.

In Example 109, the subject matter of any one or more of Examples 107 to 108 optionally includes means for initializing the available stream data structure by: obtaining a number of streams obtainable from the multi-stream storage device; obtaining stream IDs for all streams obtainable from the multi-stream storage device, each stream ID being unique; adding the stream IDs to stability-value groups; and creating, in the available stream data structure, a record for each stream ID, the record including the stream ID, a device ID of the multi-stream storage device, and the stability value corresponding to the stability-value group of the stream ID.

In Example 110, the subject matter of any one or more of Examples 106 to 109, wherein the stream range value includes the tree ID of the data.

In Example 111, the subject matter of Example 110, wherein the stream range value includes the level ID of the data.

In Example 112, the subject matter of Example 111, wherein the stream range value includes the node ID of the data.

In Example 113, the subject matter of any one or more of Examples 110 to 112, wherein the stream range value includes the kvset ID of the data.

In Example 114, the subject matter of any one or more of Examples 106 to 113, wherein the stream range value includes the level ID of the data.

In Example 115, the subject matter of any one or more of Examples 106 to 114, wherein performing the lookup in the selected stream data structure includes: failing to find the stream range value in the selected stream data structure; locating the stream ID from the selected stream data structure or an available stream data structure based on the contents of the selected stream data structure; and creating an entry in the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.
In Example 116, the subject matter of Example 115, wherein locating the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure includes: comparing a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries equals the number of second entries; locating, from the selected stream data structure, a group of entries corresponding to the stability value; and returning the stream ID of the entry in the group of entries having the oldest timestamp.

In Example 117, the subject matter of any one or more of Examples 115 to 116, wherein locating the stream ID from the selected stream data structure or the available stream data structure based on the contents of the selected stream data structure includes: comparing a number of first entries from the selected stream data structure with a number of second entries from the available stream data structure to determine that the number of first entries does not equal the number of second entries; performing a lookup in the available stream data structure using the stability value and the stream IDs in the entries of the selected stream data structure; receiving a result of the lookup including a stream ID that is not included in the entries of the selected stream data structure; and adding an entry to the selected stream data structure, the entry including the stream ID, the stream range value, and a timestamp of the time the entry was added.

In Example 118, the subject matter of any one or more of Examples 106 to 117, wherein returning the stream ID corresponding to the stream range from the selected stream data structure includes updating the timestamp of the entry in the selected stream data structure corresponding to the stream ID.

In Example 119, the subject matter of any one or more of Examples 106 to 118, wherein the write request includes a WLAST flag, and wherein returning the stream ID corresponding to the stream range from the selected stream data structure includes removing the entry corresponding to the stream ID from the selected stream data structure.

In Example 120, the subject matter of any one or more of Examples 106 to 119 optionally includes means for removing, from the selected stream data structure, an entry having a timestamp that exceeds a threshold.
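The stream-assignment flow enumerated above (notably Examples 16, 17, 28, and 29 and their method, medium, and means counterparts) can be consolidated into one illustrative sketch. The dictionary shapes of the selected and available stream data structures, the tuple used as the stream range value, and every identifier below are assumptions made for illustration, not the claimed implementation.

```python
import time

def stream_range_value(kvs_range):
    """Hypothetical stream range value built from fields of the KVS tree range."""
    return (kvs_range["tree_id"], kvs_range["level_id"],
            kvs_range["node_id"], kvs_range["kvset_id"])

def assign_stream(kvs_range, stability, selected, available, wlast=False):
    """selected: stream-range-value -> {"stream_id", "ts"} (the selected stream
    data structure); available: stability value -> list of stream IDs (the
    available stream data structure)."""
    rv = stream_range_value(kvs_range)
    entry = selected.get(rv)
    if entry is None:
        # Lookup miss (cf. Example 17): take a stream for this stability
        # value, here by round-robin, and record it with a timestamp.
        ids = available[stability]
        ids.append(ids.pop(0))              # rotate the list for round-robin
        entry = {"stream_id": ids[-1], "ts": time.time()}
        selected[rv] = entry
    else:
        entry["ts"] = time.time()           # refresh timestamp (cf. Example 28)
    stream_id = entry["stream_id"]
    if wlast:
        del selected[rv]                    # last write for this kvset (cf. Example 29)
    return stream_id
```

For instance, with `available = {"HOT": [1, 2]}` and an empty `selected`, repeated misses for HOT data alternate between streams 1 and 2, while repeated writes against the same KVS tree range reuse and re-timestamp a single entry.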
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as "examples." Such examples may include elements in addition to those shown or described. However, the inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the inventors contemplate examples using any combination or permutation of the elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof) or with respect to other examples (or one or more aspects thereof) shown or described herein.
All publications, patents, and patent documents mentioned in this specification are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usage between this document and the documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to the usage in this document; for irreconcilable inconsistencies, the usage in this document prevails.
In this document, the term "a" or "an" is used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein." Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still considered to fall within the scope of that claim. Moreover, in the appended claims, the terms "first," "second," and "third" are used merely as labels and are not intended to impose numerical requirements on their objects.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure, with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. |
Solid state transducers with state detection, and associated systems and methods are disclosed. A solid state transducer system in accordance with a particular embodiment includes a support substrate and a solid state emitter carried by the support substrate. The solid state emitter can include a first semiconductor component, a second semiconductor component, and an active region between the first and second semiconductor components. The system can further include a state device carried by the support substrate and positioned to detect a state of the solid state emitter and/or an electrical path of which the solid state emitter forms a part. The state device can be formed from at least one state-sensing component having a composition different than that of the first semiconductor component, the second semiconductor component, and the active region. The state device and the solid state emitter can be stacked along a common axis. In further particular embodiments, the state-sensing component can include an electrostatic discharge protection device, a thermal sensor, or a photosensor. |
1. A solid state transducer system, comprising: a support substrate; a solid state emitter carried by the support substrate, the solid state emitter comprising a first semiconductor component, a second semiconductor component, and an active region between the first and second semiconductor components; a state device carried by the support substrate and positioned to detect a state of the solid state emitter, wherein the state device is formed from at least one state-sensing component having a composition different than that of the first semiconductor component, the second semiconductor component, and the active region, and wherein the state device and the solid state emitter are stacked along a common axis; and a controller operatively coupled to the solid state emitter and the state device to receive a signal from the state device and control the solid state emitter based at least in part on the signal received from the state device.
2. The solid state transducer system of claim 1 wherein the support substrate has a first side and a second side, wherein the solid state emitter is over the first side of the support substrate, and further comprising: a first via extending through the support substrate to the first semiconductor component of the solid state emitter, the first via having an electrically conductive material that defines a first emitter contact at the second side of the support substrate; and a second via extending through the support substrate to the second semiconductor component of the solid state emitter, the second via having an electrically conductive material that defines a second emitter contact at the second side of the support substrate.
3. The solid state transducer system of claim 2 wherein the state device includes a photosensor positioned to receive radiation emitted by the solid state emitter, and wherein the signal corresponds to a characteristic of the radiation.
4. The solid state transducer system of claim 3, further comprising a reflective material positioned between the solid state emitter and the photosensor to reflect radiation emitted by the solid state emitter, wherein the reflective material includes an aperture positioned between the active region and the photosensor to pass radiation from the active region to the photosensor.
5. The solid state transducer system of claim 4 wherein the reflective material is conductive.
6. The solid state transducer system of claim 1, further comprising an electrically conductive reflective material configured to reflect radiation emitted by the solid state emitter, wherein the electrically conductive reflective material is positioned such that a plane coplanar with the electrically conductive reflective material is between the state device and the solid state emitter.
7. The solid state transducer system of claim 1 wherein the state device is a photosensor positioned (a) over a surface of the solid state emitter facing away from the support substrate and (b) to receive radiation emitted by the solid state emitter through a transparent material positioned between the photosensor and the surface of the solid state emitter facing away from the support substrate, and further comprising: a first emitter contact electrically connected to the first semiconductor component; and a second emitter contact on the surface of the solid state emitter facing away from the support substrate and electrically connected to the second semiconductor component, wherein the second emitter contact is laterally spaced apart from the photosensor on the surface of the solid state emitter facing away from the support substrate.
8. The solid state transducer system of claim 1 wherein the state device includes a thermal sensor positioned to receive thermal energy produced by the solid state emitter, and wherein the signal corresponds to a temperature.
9. The solid state transducer system of claim 8, further comprising a power source, wherein: the power source is electrically coupled to and provides electric power to the solid state emitter, and the controller is operatively coupled to the power source and configured to (a) decrease the power provided to the solid state emitter when the signal indicates a first temperature and (b) increase the power provided to the solid state emitter when the signal indicates a second temperature, lower than the first temperature.
10. The solid state transducer system of claim 8, further comprising a reflective material positioned between the solid state emitter and the thermal sensor to reflect radiation emitted by the solid state emitter.
11. The solid state transducer system of claim 8 wherein the thermal sensor includes a serpentine thermistor element, and wherein an impedance of the thermistor element changes as a function of temperature.
12. The solid state transducer system of claim 1, further comprising a power source, wherein the power source is electrically coupled to and provides electric power to the solid state emitter, wherein the controller is operatively coupled to the power source, and wherein the controller is configured to control the solid state emitter at least in part by controlling the power source to control the power provided to the solid state emitter.
13. The solid state transducer system of claim 12 wherein: the state device is a photosensor, the signal corresponds to an output level of the radiation, the controller is configured to increase the power provided to the solid state emitter when the signal indicates that the radiation output level of the solid state emitter is below a predefined low output level, and the controller is configured to decrease the power provided to the solid state emitter when the signal indicates that the radiation output level of the solid state emitter is above a predefined high output level.
14. The solid state transducer system of claim 1 wherein the solid state emitter, the state device, and the support substrate form a single die, wherein the support substrate is the only support substrate of the die, and wherein the state device is formed from a plurality of materials disposed conformally and sequentially on the solid state emitter.
15. The solid state transducer system of claim 1, further comprising: first and second emitter contacts, the first emitter contact electrically connected to the first semiconductor component, the second emitter contact electrically connected to the second semiconductor component; and first and second state device contacts connected to the state device, the emitter contacts being addressable separately from the state device contacts. |
TECHNICAL FIELD
The present technology is directed generally to solid state transducers ("SSTs"), including transducers having integrated state detection devices and functions, and associated systems and methods.
BACKGROUND
Solid state lighting ("SSL") devices are used in a wide variety of products and applications. For example, mobile phones, personal digital assistants ("PDAs"), digital cameras, MP3 players, and other portable electronic devices utilize SSL devices for backlighting. SSL devices are also used for signage, indoor lighting, outdoor lighting, and other types of general illumination. SSL devices generally use light emitting diodes ("LEDs"), organic light emitting diodes ("OLEDs"), and/or polymer light emitting diodes ("PLEDs") as sources of illumination, rather than electrical filaments, plasma, or gas.
Figure 1A is a cross-sectional view of a conventional SSL device 10a with lateral contacts. As shown in Figure 1A, the SSL device 10a includes a substrate 20 carrying an LED structure 11 having an active region 14, e.g., containing gallium nitride/indium gallium nitride (GaN/InGaN) multiple quantum wells ("MQWs"), positioned between N-type GaN 15 and P-type GaN 16. The SSL device 10a also includes a first contact 17 on the P-type GaN 16 and a second contact 19 on the N-type GaN 15. The first contact 17 typically includes a transparent and conductive material (e.g., indium tin oxide ("ITO")) to allow light to escape from the LED structure 11. In operation, electrical power is provided to the SSL device 10a via the contacts 17, 19, causing the active region 14 to emit light.
Figure 1B is a cross-sectional view of another conventional LED device 10b in which the first and second contacts 17 and 19 are opposite each other, e.g., in a vertical rather than lateral configuration. During formation of the LED device 10b, a growth substrate, similar to the substrate 20 shown in Figure 1A, initially carries an N-type GaN 15, an active region 14 and a P-type GaN 16. The first contact 17 is disposed on the P-type GaN 16, and a carrier 21 is attached to the first contact 17. The substrate is removed, allowing the second contact 19 to be disposed on the N-type GaN 15. The structure is then inverted to produce the orientation shown in Figure 1B. In the LED device 10b, the first contact 17 typically includes a reflective and conductive material (e.g., silver or aluminum) to direct light toward the N-type GaN 15.
One aspect of the LEDs shown in Figures 1A and 1B is that an electrostatic discharge ("ESD") event can cause catastrophic damage to the LED, and render the LED inoperable. Accordingly, it is desirable to reduce the effects of ESD events. However, conventional approaches for mitigating the effects of ESD typically include connecting a protection diode to the SST device, which requires additional connection steps and can compromise the electrical integrity of the resulting structure. Another aspect of the LEDs shown in Figures 1A and 1B is that the performance levels of the devices may vary due to internal heating, drive current, device age and/or environmental effects. Accordingly, there remains a need for reliably and cost-effectively manufacturing LEDs with suitable protection against ESD and other performance-degrading factors.
BRIEF DESCRIPTION OF THE DRAWINGS
Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale.
Instead, emphasis is placed on illustrating clearly the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views and/or embodiments.
Figure 1A is a partially schematic, cross-sectional illustration of an SSL device having a lateral arrangement in accordance with the prior art.
Figure 1B is a partially schematic, cross-sectional illustration of another SSL device having a vertical arrangement in accordance with the prior art.
Figure 2A is a schematic block diagram of a system configured in accordance with an embodiment of the presently disclosed technology.
Figure 2B is a cross-sectional view of an SST device having an electrostatic discharge device, configured and integrated in accordance with embodiments of the presently disclosed technology.
Figures 3A-3G are cross-sectional views of a portion of a microelectronic substrate undergoing a process for forming an SST device and an associated electrostatic discharge device in accordance with embodiments of the presently disclosed technology.
Figure 4 is a cross-sectional view of an SST device having an electrostatic discharge device configured and integrated in accordance with embodiments of the presently disclosed technology.
Figures 5A and 5B are cross-sectional views of the SST device of Figure 4 during operation in accordance with embodiments of the presently disclosed technology.
Figure 6 is a partially schematic illustration of an SST device having an integrated photodiode formed from an epitaxial growth substrate in accordance with an embodiment of the presently disclosed technology.
Figure 7 is a partially schematic, cross-sectional illustration of an SST device having an integrated photodiode formed on an additional substrate material in accordance with another embodiment of the presently disclosed technology.
Figures 8A-8L are partially schematic, cross-sectional illustrations of a process for forming an SST device having an integrated photodiode located beneath an active material in accordance with another embodiment of the presently disclosed technology.
Figure 9 is a partially schematic, isometric illustration of an SST device having an integrated thermal sensor in accordance with still another embodiment of the presently disclosed technology.
DETAILED DESCRIPTION
Specific details of several embodiments of representative SST devices and associated methods of manufacturing SST devices are described below. The term "SST" generally refers to solid-state transducer devices that include a semiconductor material as the active medium to convert electrical energy into electromagnetic radiation in the visible, ultraviolet, infrared, and/or other spectra. For example, SSTs include solid-state light emitters (e.g., LEDs, laser diodes, etc.) and/or other sources of emission other than electrical filaments, plasmas, or gases. In other embodiments, SSTs can include solid-state devices that convert electromagnetic radiation into electricity. The term solid state emitter ("SSE") generally refers to the solid state components or light emitting structures that convert electrical energy into electromagnetic radiation in the visible, ultraviolet, infrared, and/or other spectra. SSEs include semiconductor LEDs, PLEDs, OLEDs, and/or other types of solid state devices that convert electrical energy into electromagnetic radiation in a desired spectrum.
A person skilled in the relevant art will understand that the presently disclosed technology may have additional embodiments and that this technology may be practiced without several of the details of the embodiments described below with reference to Figures 2A-9.
Reference herein to "one embodiment," "an embodiment," or similar formulations means that a particular feature, structure, operation, or characteristic described in connection with the embodiment is included in at least one embodiment of the present technology. Thus, the appearances of such phrases or formulations herein are not necessarily all referring to the same embodiment. Furthermore, various particular features, structures, operations, or characteristics may be combined in any suitable manner in one or more embodiments.
In particular embodiments, a solid state transducer system includes a support substrate and a solid state emitter carried by the support substrate. The solid state emitter can comprise a first semiconductor component, a second semiconductor component, and an active region between the first and second semiconductor components. The system further includes a state device carried by the support substrate and positioned to detect a state of the solid state emitter and/or an electrical path of which the solid state emitter forms a part. The state device is formed from at least one state-sensing component having a composition different than that of the first semiconductor component, the second semiconductor component, and the active region. The state device and the solid state emitter can be stacked along a common axis. For example, in particular embodiments, the state device can include an electrostatic discharge protection device, a photosensor, or a thermal sensor. The state device can be formed integrally with the solid state emitter, using (in at least some embodiments) a portion of the same epitaxial growth substrate used to form the SSE. The state device can be formed above or below the stacking axis of the solid state emitter, directly along the axis, or off the axis, depending upon the particular embodiment.
Figure 2A is a schematic illustration of a representative system 290. The system 290 can include an SST device 200, a power source 291, a driver 292, a processor 293, and/or other subsystems or components 294. The resulting system 290 can perform any of a wide variety of functions, such as backlighting, general illumination, power generation, sensing, and/or other functions. Accordingly, representative systems 290 can include, without limitation, hand-held devices (e.g., cellular or mobile phones, tablets, digital readers, and digital audio players), lasers, photovoltaic cells, remote controls, computers, lights and lighting systems, and appliances (e.g., refrigerators). Components of the system 290 may be housed in a single unit or distributed over multiple, interconnected units (e.g., through a communications network). The components of the system 290 can also include local and/or remote memory storage devices, and any of a wide variety of computer-readable media.
In many instances, it is desirable to monitor the performance of the SST device 200 and/or the environment in which the SST device 200 operates, and make appropriate adjustments. For example, if the SST device 200 is subjected to an excessive voltage (e.g., an electrostatic discharge or "ESD"), it is desirable to protect the device with a diode or other non-linear circuit component.
If the SST device 200 approaches an overheat condition, it may be desirable to reduce the current supplied to the device until the device cools down. If the SST device 200 includes a solid state lighting (SSL) device, and the light emitted by the device does not meet target emission specifications, it may be desirable to adjust the output of the device. In each of these representative examples, the system 290 can include a state monitor or device 295 that monitors a state of the SST device 200, and participates in or facilitates a response. In some cases the state monitor 295 can act directly to provide a response. For example, a diode wired in parallel with the SST device 200 can respond directly to a high voltage by closing, causing the current to bypass the SST device 200. In other embodiments, the state monitor 295 can respond with the assistance of another device, e.g., the processor 293. For example, if the state monitor 295 is a photosensor, it can provide a signal to the processor 293 corresponding to a warmth, color and/or other characteristic of the emitted light, and the processor 293 can issue a responsive command to change the output of the SSE. In another embodiment, the state monitor 295 includes a thermistor, and can provide to the processor 293 a signal corresponding to a high temperature condition. The processor 293 can respond by directing the SST device 200 to reduce power or cease operation until the temperature falls, in order to reduce the impact of the elevated temperature on the SST device 200.
Specific examples of state monitors that include ESD protection devices are described below with reference to Figures 2B-5B. Certain features of these examples are also described in co-pending U.S. Application No. 13/223,098, titled "Solid State Lighting Devices, Including Devices Having Integrated Electrostatic Discharge Protection, and Associated Systems and Methods," filed on August 31, 2011, and incorporated herein by reference. Examples of state monitors that include photosensors are described below with reference to Figures 6-8L, and examples of state monitors that include thermal sensors (e.g., thermistors) are described below with reference to Figure 9. In any of these embodiments, the state monitor can detect the state of the SSE (e.g., as is the case with a photosensor and a thermal sensor) and/or the state of an electrical path or circuit of which the SSE forms a part (as is the case with an ESD diode).
Figure 2B is a cross-sectional view of an SST device 200 configured in accordance with embodiments of the presently disclosed technology. The SST device 200 can include an SSE 202 mounted to or otherwise carried by a support substrate 230. The SST device 200 further includes a state device or monitor 295 in the form of an electrostatic discharge device 250 carried by the SSE 202. Accordingly, the electrostatic discharge device 250 represents a specific example of a state monitor. As will be described further below, the electrostatic discharge device 250 can be manufactured to be integral with the SST device 200 (and in particular, the SSE 202), e.g., to improve system reliability, manufacturability and/or performance, and/or to reduce system size.
The SSE 202 can include a first semiconductor material 204, a second semiconductor material 208, and an active region 206 between the first and second semiconductor materials 204, 208.
In one embodiment, the first semiconductor material 204 is a P-type gallium nitride ("GaN") material, the active region 206 is an indium gallium nitride ("InGaN") material, and the second semiconductor material 208 is an N-type GaN material. In other embodiments, the semiconductor materials of the SSE 202 can include at least one of gallium arsenide ("GaAs"), aluminum gallium arsenide ("AlGaAs"), gallium arsenide phosphide ("GaAsP"), aluminum gallium indium phosphide ("AlGaInP"), gallium(III) phosphide ("GaP"), zinc selenide ("ZnSe"), boron nitride ("BN"), aluminum nitride ("AlN"), aluminum gallium nitride ("AlGaN"), aluminum gallium indium nitride ("AlGaInN"), and/or another suitable semiconductor material.
The illustrated electrostatic discharge device 250 includes an epitaxial growth substrate 210 and a semiconductor material 216 (e.g., a buffer material). The electrostatic discharge device 250 further includes a first contact 246 (e.g., formed from a first conductive material) electrically connected to a via 240 that extends through the electrostatic discharge device 250 and through a portion of the SSE 202. The first contact 246 electrically contacts a conductive (and typically reflective) material 220 below the active region 206 and can provide an external terminal for interfacing with a power source or sink. Accordingly, the conductive material 220 operates as a P-contact. The first contact 246 is electrically insulated in the via 240 from the surrounding semiconductor material 216 and portions of the SSE 202 by an insulator 242. The illustrated electrostatic discharge device 250 further includes a second contact 248 (e.g., formed from a second conductive material) that doubles as an N-contact for the SSE 202. Accordingly, the second contact 248 can extend over an upper surface 209 of the SSE 202, e.g., in contact with the N-type material 208. The second contact 248 is electrically insulated from the semiconductor material 216 by a second insulator 244, and is transparent to allow radiation (e.g., visible light) to pass out through the external surface of the SST device 200 from the active region 206. In the illustrated embodiment, the first contact 246 and the second contact 248 are shared by the SSE 202 and the electrostatic discharge device 250. More specifically, the first contact 246 is electrically coupled to both the first semiconductor layer 204 of the SSE 202 and the epitaxial growth substrate 210 of the electrostatic discharge device 250. The second contact 248 is electrically coupled to both the second semiconductor layer 208 of the SSE 202 and the epitaxial growth substrate 210 of the electrostatic discharge device 250. Accordingly, the electrostatic discharge device 250 is connected in parallel with the SSE 202. The conductive materials forming the first contact 246, the second contact 248 and an electrical path through the via 240 can be the same or different, depending upon the particular embodiment. For example, the via 240 can include a third conductive material that is the same as the first conductive material, though it may be deposited in a separate step.
The SST device 200 can be coupled to a power source 270 that is in turn coupled to a controller 280. The power source 270 provides electrical current to the SST device 200, under the direction of the controller 280.
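The consequence of this parallel, opposite-polarity connection, which the next paragraph describes in operating terms, can be summarized with a toy model. The short Python sketch below is purely illustrative; the function name, the voltage thresholds, and the reduction of the circuit to three discrete cases are assumptions made for exposition, not a description of the actual device physics:

```python
def dominant_current_path(applied_voltage_v, sse_turn_on_v=2.7, esd_breakdown_v=-15.0):
    """Toy model of an SSE with a reverse-polarity protection diode in parallel.
    Threshold values are illustrative assumptions, not device specifications."""
    if applied_voltage_v >= sse_turn_on_v:
        # Normal operation: the protection diode is reverse-biased and blocks,
        # so the drive current flows through the SSE's active region.
        return "SSE"
    if applied_voltage_v <= esd_breakdown_v:
        # Reverse event (e.g., an ESD strike): the parallel diode conducts,
        # bypassing the SSE and shunting the reverse current.
        return "ESD device"
    # Between the two thresholds, neither path conducts appreciably.
    return "neither"

# A large negative spike is steered away from the emitter:
print(dominant_current_path(-2000.0))  # -> "ESD device"
```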
During normal operation, as current flows from the first semiconductor material 204 to the second semiconductor material 208, charge-carriers flow from the second semiconductor material 208 toward the first semiconductor material 204 and cause the active region 206 to emit radiation. The radiation is reflected outwardly by the conductive, reflective material 220. The electrostatic discharge device 250 provides a bypass path for current to flow between the first contact 246 and the second contact 248 under high (e.g., excessive) voltage conditions. In particular, the epitaxial growth substrate 210 between the first contact 246 and the second contact 248 can form a diode in parallel with the SSE 202, but with the opposite polarity. During normal operating conditions, the bias of the epitaxial growth substrate 210 prevents current flow through it from the first contact 246 to the second contact 248, forcing the current to pass through the SSE 202. If a significant reverse voltage is placed across the contacts 246, 248 (e.g., during an electrostatic discharge event), the epitaxial growth substrate 210 becomes highly conductive in the reverse direction, allowing the reverse current to flow through it, thus protecting the SST device from the reverse current flow.
The present technology further includes methods of manufacturing SST devices. For example, one method of forming an SST device can include forming an SSE and an electrostatic discharge device from a common epitaxial growth substrate. Representative steps for such a process are described in further detail below with reference to Figures 3A-3G.
Figures 3A-3G are partially schematic, cross-sectional views of a portion of a microelectronic substrate 300 undergoing a process of forming an embodiment of the SST device 200 described above, in accordance with embodiments of the technology. Figure 3A shows the substrate 300 after a semiconductor material 216 (e.g., a buffer material) has been disposed on the epitaxial growth substrate 210. The epitaxial growth substrate 210 can be silicon (e.g., Si (1,0,0) or Si (1,1,1)), GaAs, silicon carbide (SiC), polyaluminum nitride ("pAlN"), engineered substrates with silicon epitaxial surfaces (e.g., silicon on polyaluminum nitride), and/or other suitable materials. The semiconductor material 216 can be the same material as the epitaxial growth substrate 210 or a separate material bonded to the epitaxial growth substrate 210. For example, the epitaxial growth substrate 210 can be pAlN and the semiconductor material 216 can be Si (1,1,1). In any of these embodiments, the SSE 202 is formed on the semiconductor material 216.
The SSE 202 includes the first semiconductor material 204, the active region 206, and the second semiconductor material 208, which can be sequentially deposited or otherwise formed using chemical vapor deposition ("CVD"), physical vapor deposition ("PVD"), atomic layer deposition ("ALD"), plating, or other techniques known in the semiconductor fabrication arts. In the embodiment shown in Figure 3A, the second semiconductor material 208 is grown or formed on the semiconductor material 216, the active region 206 is grown or formed on the second semiconductor material 208, and the first semiconductor material 204 is grown or formed on the active region 206.
In one embodiment, N-type GaN (as described above with reference to Figure 2B) is positioned proximate to the epitaxial growth substrate 210, but in other embodiments P-type GaN is positioned proximate to the epitaxial growth substrate 210. In any of these embodiments, the SSE 202 can include additional buffer materials, stress control materials, and/or other materials, and/or the materials can have other arrangements known in the art.
In the embodiment shown in Figure 3A, a conductive, reflective material 220a is formed over the first semiconductor material 204. The conductive, reflective material 220a can be silver (Ag), gold (Au), gold-tin (AuSn), silver-tin (AgSn), copper (Cu), aluminum (Al), or any other suitable material that can provide electrical contact and reflect light emitted from the active region 206 back through the first semiconductor material 204, the active region 206, and the second semiconductor material 208, as described above with reference to Figure 2B. The conductive, reflective material 220a can be selected based on its thermal conductivity, electrical conductivity, and/or the color of light it reflects. For example, silver generally does not alter the color of the reflected light. Gold, copper, or other colored reflective materials can affect the color of the light and can accordingly be selected to produce a desired color for the light emitted by the SSE 202. The conductive, reflective material 220a can be deposited directly on the first semiconductor material 204, or a transparent, electrically conductive material 221 (shown in broken lines) can be disposed between the first semiconductor material 204 and the reflective material 220a. The transparent, electrically conductive material 221 can be indium tin oxide (ITO) or any other suitable material that is transparent, electrically conductive, and adheres or bonds the reflective material 220a to the first semiconductor material 204. The transparent, electrically conductive material 221 and the reflective material 220a can be deposited using CVD, PVD, ALD, plating, or other techniques known in the semiconductor fabrication arts. The transparent, electrically conductive material 221 and/or the reflective material 220a can accordingly form a conductive structure 222 adjacent to (e.g., in contact with) the SSE 202.
Figure 3B illustrates an embodiment of a support substrate 230 being bonded or otherwise attached to the SSE 202. The support substrate 230 can include an optional backside reflective material 220b. The backside reflective material 220b is bonded or otherwise attached to the reflective material 220a using an elevated pressure and/or elevated temperature process.
Figure 3C shows an embodiment in which the bonded reflective materials 220a, 220b (Figure 3B) form a combined reflective material 220. The epitaxial growth substrate 210 has also been thinned, e.g., by backgrinding. At this point, the remaining epitaxial growth substrate 210 can be implanted with a p-type dopant (e.g., boron) to form a p-n junction with the underlying silicon or other semiconductor material 216. In another embodiment, the substrate 210 can be doped in a prior step.
In either embodiment, because the semiconductor material 216 typically includes buffer layers to facilitate forming the SSE 202, and because the buffer layers typically include undoped, large-bandgap semiconductor layers (e.g., GaN, AlGaN or AlN), the p-n junction will be electrically isolated from the epitaxial junction that forms the SSE 202.
Figure 3D illustrates the microelectronic substrate 300 after (a) the epitaxial growth substrate 210 has been thinned (e.g., by backgrinding) and/or etched, (b) the substrate 300 has been inverted, and (c) the epitaxial growth substrate 210 has been doped. Most of the semiconductor material 216 and the epitaxial growth substrate 210 have been removed using grinding, etching, and/or other processes to expose an outer surface 209 of the second semiconductor material 208 or other portions of the SSE 202. A portion of the semiconductor material 216 and the epitaxial growth substrate 210 remains on the SSE 202 to form the electrostatic discharge device 250. This is one manner in which the electrostatic discharge device 250 can be made integral with the SSE 202 and the SST 300. In further embodiments, the same or similar techniques can be used to form multiple electrostatic discharge devices 250 integral with the SSE 202, e.g., after the surface 209 has been selectively etched or otherwise treated.
Figure 3E illustrates the microelectronic substrate 300 after a via 240 has been formed through the electrostatic discharge device 250 and a portion of the SSE 202. The via 240 can be formed by drilling, etching, or other techniques known in the semiconductor fabrication arts. The via 240 includes sidewalls 241 and provides access to the reflective material 220, which is in electrical communication with the first semiconductor material 204. In other embodiments, the via 240 provides access to the conductive material 221, which is in direct electrical contact with the first semiconductor material 204. Figure 3F shows the microelectronic substrate 300 after a first insulator 242 has been deposited or formed in the via 240 and a second insulator 244 has been deposited or formed on a lateral sidewall 243 of the electrostatic discharge device 250.
Figure 3G shows the microelectronic substrate 300 after a conductive material has been disposed in the via 240 (inward of the first insulator 242), and outside the via 240 to form the first contact 246. The first contact 246 can comprise silver (Ag), gold (Au), gold-tin (AuSn), silver-tin (AgSn), copper (Cu), aluminum (Al), and/or other conductive materials. The first contact 246 is insulated from the semiconductor material 216 and the SSE 202 by the first insulator 242. The second contact 248 has been deposited or otherwise disposed or formed on the outer surface 209 of the SSE 202 and on the epitaxial growth substrate 210 of the electrostatic discharge device 250. The second insulator 244 insulates the second contact 248 from the semiconductor material 216.
In selected embodiments, a lens (not shown in Figure 3G) can be formed over the SSE 202. The lens can include a light-transmissive material made from silicone, polymethylmethacrylate (PMMA), resin, or other materials with suitable properties for transmitting the radiation emitted by the SSE 202. The lens can be positioned over the SSE 202 such that light emitted by the SSE 202 and reflected by the reflective material 220 passes through the lens.
The lens can include various optical features, such as a curved shape, to diffract or otherwise change the direction of light emitted by the SSE 202 as it exits the lens.
Embodiments of the integral electrostatic discharge device 250 offer several advantages over traditional systems. For example, because in particular embodiments the electrostatic discharge device 250 is comprised of materials (e.g., the epitaxial growth substrate 210 and the semiconductor material 216) that are also used to form the SSE 202, the material cost can be less than that of separately-formed electrostatic devices. Moreover, traditional systems having a separate electrostatic discharge die require additional pick-and-place steps to place the die proximate to the SSE 202. Still further, such traditional systems require forming additional and/or separate electrical connections to connect the electrostatic device to the SSE.
Figure 4 is a cross-sectional view of an SST device 400 having an electrostatic discharge device 450 configured in accordance with further embodiments of the present technology. The SST device 400 can have several features generally similar to those described above with reference to Figures 2-3G. For example, the SST device 400 can include an SSE 202 that in turn includes a first semiconductor material 204 (e.g., a P-type material), a second semiconductor material 208 (e.g., an N-type material), and an active region 206 between the first and second semiconductor materials 204, 208. The SST device 400 can further include a reflective material 220 between the support substrate 230 and the SSE 202. Typically, the SSE 202 and the reflective/conductive material 220 are formed on an epitaxial growth substrate 210 (shown in dashed lines in Figure 4). The structures that form the electrostatic discharge device 450 and that electrically connect the electrostatic discharge device 450 to the SSE can be formed on the SSE 202 while the SSE 202 is supported by the epitaxial growth substrate 210. The epitaxial growth substrate 210 can then be removed.
In the illustrated embodiment, the electrostatic discharge device 450 is fabricated on the SSE 202, and both the SSE 202 and the electrostatic discharge device 450 are carried by the substrate 230, with the electrostatic discharge device 450 positioned between the substrate 230 and the SSE 202. Typically, the fabrication steps for forming the electrostatic discharge device 450 are performed while the SSE 202 is inverted from the orientation shown in Figure 4, and before the substrate 230 is attached. The electrostatic discharge device 450 can include a plurality of electrostatic junctions 460 (identified individually as first-third junctions 460a-460c). Each electrostatic junction 460 can include a first conductive material 454 (identified individually by reference numbers 454a-454c), an intermediate material 456 (identified individually by reference numbers 456a-456c), and a second conductive material 458 (identified individually by reference numbers 458a-458c). The materials can be disposed using any of a variety of suitable deposition, masking, and/or etching processes. These materials can be different than the materials forming the SSE 202 because they are not required to perform a light emitting function. As noted above and as will be understood by one of ordinary skill in the art, these techniques can be used to sequentially form the illustrated layers on the SSE 202 while the SST 400 is inverted relative to the orientation shown in Figure 4.
One or more insulating materials 461 electrically isolate the layers from the first semiconductor material 204 and/or from the support substrate 230.
The intermediate material 456 can have electrical properties different than those of the first conductive material 454 and the second conductive material 458. In some embodiments, the intermediate material 456 can be a semiconductor (e.g., amorphous silicon) or a metal. The first conductive material 454a of one junction (e.g., the first junction 460a) is electrically coupled to the second conductive material 458b of an adjacent junction (e.g., the second junction 460b). While the illustrated electrostatic discharge device 450 includes three junctions 460 placed in series, in further embodiments more or fewer junctions 460 can be used. Furthermore, to obtain different current-handling capacities for the electrostatic discharge device 450, the junctions 460 can be altered in size, and/or multiple junctions 460 can be arranged in parallel.
The electrostatic discharge device 450 can further include a first contact 448 positioned at a first via 449 and electrically connected between one of the junctions 460 (e.g., the first metal layer 454c of the third junction 460c) and the second semiconductor material 208. The electrostatic discharge device 450 additionally includes a second contact 446 positioned at a second via 440 extending through the electrostatic discharge device 450. The second contact 446 electrically couples a junction 460 (e.g., the second metal layer 458a of the first junction 460a) to the reflective material 220 or, in further embodiments, to a separate conductive layer or to the first semiconductor material 204. The substrate 230 can be conductive so as to route current to the second contact 446. An insulating material 461 electrically isolates the first and second contacts 446, 448 from adjacent structures.
In some embodiments, components of the electrostatic discharge device 450 are deposited on the SSE 202 by PVD, ALD, plating, or other techniques known in the semiconductor fabrication arts. The first and second vias 449 and 440 can be formed in the electrostatic discharge device 450 and/or the SSE 202 using the methods described above with reference to Figure 3E. In a representative embodiment, the electrostatic discharge device 450 is formed on the SSE 202 before the substrate 230 is attached. In some embodiments, the electrostatic discharge device 450 can be attached to the substrate and/or the SSE 202 by means of bonding layers. In still further embodiments, the electrostatic discharge device 450 can be positioned on a portion of an external surface of the SSE 202 without the substrate 230.
Figures 5A and 5B are cross-sectional views of the SST device 400 of Figure 4 during operation in accordance with embodiments of the technology. During normal operation, as illustrated in Figure 5A, current flows in the direction of the arrows from the second contact 446 to the first semiconductor material 204, through the SSE 202 to the second semiconductor material 208 as described above, to the first contact 448. As illustrated in Figure 5B, during an electrostatic discharge event, the SST device 400 can be protected from reverse currents by providing a path for reverse current flow, illustrated by the arrows, through the junctions 460.
The reverse current can be directed through the substrate 230, rather than through the SSE 202.
Figure 6 is a partially schematic, partial cross-sectional illustration of a system 600 that includes a solid state emitter 202 having components generally similar to those described above, including an active region 206 positioned between a first semiconductor material 204 and a second semiconductor material 208. The SSE 202 is carried by a support substrate 230, and a conductive/reflective material reflects emitted radiation outwardly through the second semiconductor material 208. The support substrate 230 can be conductive and can accordingly function as a first contact 646. The SSE 202 receives power from the first contact 646 and a second contact 648.
The system 600 can further include a state device 695 that in turn includes a photosensor 650 (e.g., a photodiode). The photosensor 650 can be formed using residual material from the buffer layer 216 and the epitaxial growth substrate 210, in a manner generally similar to that described above with reference to Figures 2B-3D. In a particular aspect of an embodiment shown in Figure 6, the epitaxial growth substrate 210 is doped and/or otherwise treated to form a photosensitive state-sensing component 611. Representative materials for forming the state-sensing component 611 include silicon germanium, gallium arsenide and lead sulfide. The state-sensing component 611 can be coupled to a first state device contact 651 and a second state device contact 652, which are in turn connected to the controller 280. An insulating material 653 provides electrical insulation between the photosensor 650 and the second contact 648. In a further particular aspect of this embodiment, the buffer layer 216 is transparent, allowing light emitted from the active region 206 to impinge upon the state-sensing component 611. This can activate the state-sensing component 611, which in turn transmits a signal to the controller 280. Based upon the signal received from the state device 695, the controller can direct the power source 270 to supply, halt, and/or change the power provided to the SSE 202. For example, if the state device 695 identifies a low output level for the SSE 202, the controller 280 can increase the power provided to the SSE 202. If the SSE 202 produces more than enough light, the controller 280 can reduce the power supplied to the SSE 202. If the color, warmth, and/or other characteristic of the light detected by the state device 695 falls outside a target range, the controller 280 can control the power provided to the SSE 202 and/or can vary the power provided to multiple SSEs 202 that together produce a particular light output.
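The closed-loop behavior just described can be sketched in a few lines. The following Python fragment is a hypothetical illustration of one possible decision rule for the controller 280; the band limits, the step size, and the normalization of the sensed output are assumptions, not values taken from the disclosure:

```python
def adjust_power(power, sensed_output, low=0.9, high=1.1, step=0.05):
    """One feedback step: keep the sensed radiation output (normalized so
    that 1.0 is the target) inside the band [low, high] by nudging power."""
    if sensed_output < low:
        return power + step   # output too low: increase the power provided
    if sensed_output > high:
        return power - step   # output too high: decrease the power provided
    return power              # within the target range: hold power steady

# Example: a dim reading nudges the drive power upward.
print(adjust_power(power=0.5, sensed_output=0.8))  # -> 0.55
```

In a multi-emitter arrangement, the same rule could be applied per SSE to hold a combined color point, consistent with the passage above.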
Figure 7 is a partially schematic, partial cross-sectional illustration of a device 700 that includes a state device 795 in the form of a photosensor 750 in accordance with another embodiment. Unlike the arrangement described above with reference to Figure 6, the photosensor 750 shown in Figure 7 is not formed from residual material used to form the SSE 202. Instead, the photosensor 750 can include a state-sensing component 711 and an electrically conductive, transparent material 712 (e.g., zinc oxide) disposed between the state-sensing component 711 and the second semiconductor material 208. The state-sensing component 711 can include amorphous silicon and/or another material that is responsive to light emanating from the active region 206 and passing through the conductive/transparent material 712. The state device 795 can further include first and second state device contacts 751, 752 that transmit signals to the controller 280 corresponding to the amount, quality and/or other characteristic of the light received from the active region 206. An insulating material 753 provides electrical insulation between the state device 795 and the second contact 648. Accordingly, the system 700 (and in particular, the controller 280) can direct the operation of the SSE 202 based upon information received from the state device 795.
In both of the embodiments described above with reference to Figures 6 and 7, the state device and state-sensing component are positioned so as to receive at least some of the light that would normally be transmitted directly out of the solid state transducer. In particular, the state-sensing devices can be positioned along a line of sight or optical axis between the active region 206 and the external environment that receives light from the active region 206. In other embodiments, the state-sensing device can be buried within or beneath the SSE 202, out of the optical axis, in a manner that can reduce or eliminate the potential interference of the state-sensing devices with light or other radiation emitted by the SSE 202. Figures 8A-8L describe a process for forming such devices in accordance with particular embodiments of the disclosed technology.
Figure 8A illustrates a device 800 during a particular phase of manufacture at which the device 800 includes components generally similar to those described above with reference to Figure 3A. Accordingly, the device 800 can include an epitaxial growth substrate 210 upon which a buffer layer 216 and an SSE 202 are fabricated. The SSE 202 can include an active region 206 positioned between first and second semiconductor materials 204, 208. A conductive, reflective material 220 is positioned to reflect incident light away from the first semiconductor material 204 and through the active region 206 and the second semiconductor material 208.
The processes described below with reference to Figures 8B-8L include disposing and removing material using any of a variety of suitable techniques, including PVD or CVD (for deposition) and masking/etching for removal. Using these techniques, sequential layers of material are stacked along a common axis to produce the final product. Beginning with Figure 8B, a recess 801 is formed in the conductive, reflective material 220. The recess 801 allows light to pass from the SSE 202 to a photosensitive state device formed in and/or in optical communication with the recess 801. In Figure 8C, a transparent insulating material 802 is disposed in the recess 801. In Figure 8D, a transparent conductive material 712 is disposed on the transparent insulating material 802 within the recess 801. As shown in Figure 8E, a portion of the transparent conductive material 712 is removed, and the space formerly occupied by the removed portion is filled with additional transparent insulating material 802. Accordingly, the transparent conductive material 712 is electrically isolated from the surrounding conductive reflective material 220 by the transparent insulating material 802.
In Figure 8F, an additional layer of transparent insulating material 802 is disposed over the transparent conductive material 712. In Figure 8G, a portion of the transparent insulating material 802 positioned over the transparent conductive material 712 is removed and replaced with a state-sensing component 811.
In a representative embodiment, the state-sensing component 811 includes amorphous silicon, and in other embodiments, the state-sensing component 811 can include other materials. In any of these embodiments, an additional volume of transparent insulating material 802 is disposed on one side of the state-sensing component 811, and a first contact material 803 is disposed on the other side so as to contact the transparent conductive material 712.
In Figure 8H, yet a further layer of transparent insulating material 802 is disposed on the underlying structures. A portion of this layer is removed and filled with additional first contact material 803 to form an electrical contact with one side of the state-sensing component 811 via the transparent conductive material 712. A second contact material 804 is disposed in contact with the opposite surface of the state-sensing component 811 to provide for a complete circuit.
In Figure 8I, a further layer of transparent insulating material 802 is disposed over the first and second contact materials 803, 804, and a substrate support 830 is attached to the insulating material 802. The structure is then inverted, as shown in Figure 8J, and the epitaxial growth substrate 210 and buffer material 216 shown in Figure 8I are removed. Accordingly, the second semiconductor material 208 is now exposed. In Figure 8K, a plurality of vias 840 (four are shown in Figure 8K as vias 840a-840d) are made through the substrate support 830 to an extent sufficient to make electrical contact with multiple components within the device 800. For example, a first via 840a makes contact with the second semiconductor material 208 (or, as indicated in dashed lines, a transparent conductive layer overlying the second semiconductor material 208), a second via 840b makes contact with the conductive, reflective material 220, a third via 840c makes contact with the second contact material 804, and a fourth via 840d makes contact with the first contact material 803. Each of the vias 840a-840d is lined with an insulating material 805 to prevent unwanted electrical contact with other elements in the stack.
Figure 8L is a partially schematic illustration of the device 800 after each of the vias 840 has been filled with a conductive material 806. The conductive material 806 forms first and second contacts 846, 848, which provide power from the power source 270 to the SSE 202. The conductive material 806 also forms first and second state device contacts 851, 852 that provide electrical communication with the controller 280. As in the case of the embodiments described above with reference to Figures 6 and 7, the resulting state device 895 is stacked along a common axis with the SSE 202. Unlike the arrangement described above with reference to Figures 6 and 7, the state device 895 (in the form of a photosensor 850) is not in the direct optical path of light or other radiation emitted by the SSE 202. In operation, the state-sensing component 811 receives radiation through the transparent, insulating material 802 and the transparent conductive material 712. Based upon the radiation incident on the state-sensing component 811, the photosensor 850 can send a signal to the controller 280, which in turn controls the power source 270 and the SSE 202.
Further details of particular embodiments for constructing an SST device generally similar to that described above with reference to Figures 8A-8L are included in co-pending U.S. Application No.
13/218,289, titled "Vertical Solid State Transducers Having Backside Terminals and Associated Systems and Methods", filed on August 25, 2011, and incorporated herein by reference. In other embodiments, the SST devices can be coupled to external devices with contacts having positions, arrangements, and/or manufacturing methodologies different than those expressly described above.
Figure 9 is a partially schematic, partially exploded illustration of an SST device 900 that includes a state device 995 configured to detect thermal characteristics associated with the SSE 202. In the illustrated embodiment, the state device 995 can include an insulating layer 902 positioned between the conductive reflective contact 220 and a state-sensing component 911. In a further particular embodiment, the state-sensing component 911 can include a thermistor material (e.g., a suitable polymer or ceramic) and in other embodiments, the state-sensing component 911 can include other thermally sensitive materials (e.g., resistive metals). In any of these embodiments, an additional volume of insulating material 902 can be positioned against the state-sensing component 911 to "sandwich" the state-sensing component 911 and electrically insulate the state-sensing component 911 from the SSE 202. First and second state device contacts 951, 952 provide electrical communication with the state-sensing component 911. In particular embodiments, the state-sensing component 911 can include a material strip with a serpentine shape that increases component sensitivity (e.g., increases impedance or resistance change as a function of temperature). In other embodiments, the state-sensing component 911 can have other shapes. The state device contacts 951, 952 and the SSE contacts can have any of a variety of locations, including those shown in Figure 9. For example, all the contacts can be located at the top of the device, or the state device contacts can be at the top of the device and one or more SSE contacts at the bottom of the device, or all the contacts can be buried (e.g., as shown in Figure 8L). These options apply to the ESD state-sensing components and optical state-sensing components described above with reference to Figures 2B-8L as well.
In operation, the state-sensing component 911 can be coupled to a controller generally similar to that described above with reference to Figure 7, and can control the operation of the SSE in a manner based upon thermal inputs. In particular, the state-sensing component 911 can sense the temperature of the SSE 202 and/or other components of the SST device 900. In response to a high temperature indication, the controller can reduce the power provided to the SST device 900 to allow the SST device 900 to cool before it becomes damaged. After the SST device 900 has cooled (an event also indicated by the state-sensing component 911), the controller can increase the power provided to the SST device 900. An advantage of the arrangement described above with reference to Figure 9 is that the state-sensing component 911 can provide feedback that reduces high temperature operation of the SSE 202. In particular, the feedback can be used to account for reduced SSE output, reduced safe drive current, reduced forward voltage and/or reduced SSE lifetime, all of which are associated with high temperature operation.
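As a concrete illustration of this thermal feedback loop, the Python sketch below pairs the widely used beta-parameter model for an NTC thermistor with a simple two-threshold power rule. All component values, thresholds, and names here are illustrative assumptions; the serpentine thermistor element described above could have quite different characteristics:

```python
import math

def thermistor_temp_c(resistance_ohm, r0_ohm=10_000.0, t0_c=25.0, beta=3950.0):
    """Beta-parameter NTC model: 1/T = 1/T0 + ln(R/R0)/beta, temperatures in
    kelvin. All parameter values are illustrative assumptions."""
    inv_t = 1.0 / (t0_c + 273.15) + math.log(resistance_ohm / r0_ohm) / beta
    return 1.0 / inv_t - 273.15

def regulate_for_temperature(power, temp_c, hot_c=85.0, cool_c=60.0, step=0.1):
    """Reduce drive power above a hot threshold; restore it once cooled."""
    if temp_c >= hot_c:
        return max(0.0, power - step)   # overheat indication: back off
    if temp_c <= cool_c:
        return min(1.0, power + step)   # cooled down: ramp back up
    return power                        # in between: hold steady

# Example: a resistance well below R0 indicates a hot device.
t = thermistor_temp_c(1_000.0)   # roughly 88 C for these assumed parameters
print(round(t), regulate_for_temperature(1.0, t))  # -> 88 0.9
```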
One feature of several of the embodiments described above is that the state-sensing component can be formed so as to be integral with the SST and/or the SSE. Embodiments of the integrally formed state devices are not pre-formed structures and accordingly are not attachable to the SST as a unit, or removable from the SST as a unit, without damaging or rendering inoperable the SSE. The SSE and the state device can accordingly be formed as a single chip or die, rather than being formed as two separate dies that may be electrically connected together in a single package. For example, the SSE and the state device can both be supported by the same, single support substrate (e.g., the support substrate 230). For example, they can be formed from a portion of the same substrate on which the solid state emitter components are formed, as described above with reference to Figures 2-3G and 6. In the embodiments described with reference to Figures 4, 5, 7 and 8A-8L, the same epitaxial growth substrate is not used for both the solid state emitter and the state device, but the components that form the state device can be formed in situ on the solid state emitter. An advantage of the latter approach is that, in at least some embodiments, the state device can be formed so as to be on the side of the solid state emitter opposite from the path of light emitted by the solid state emitter. Accordingly, the presence of the state device does not interfere with the ability of the solid state emitter to emit light or other radiation.
Although the state device can be formed integrally with the SSE or SST, it performs a function different than that of the SSE and, accordingly, includes materials different than those that form the SSE (e.g., different than the first semiconductor material, the second semiconductor material, and the active region in between). This is the case whether the same epitaxial growth substrate used for the solid state emitter is used for the state device, or whether the state device does not use the same epitaxial growth substrate. As a result, the materials and structural arrangement of the state device are not limited to the materials and structural arrangement of the SSE. This enhanced degree of flexibility can allow for smaller state devices and greater state device efficiencies. For example, state devices in the form of photodiodes can include materials that are specifically selected to be thin and/or highly absorptive at the wavelength emitted by the SSE, producing a compact, efficient structure.
From the foregoing, it will be appreciated that specific embodiments of the technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the disclosure. For example, some of the embodiments described above describe the state devices as diodes (e.g., an ESD protection diode or a photodiode). In other embodiments, the state device can include a different, non-linear circuit element. In still further embodiments, the state device may be linear (e.g., the thermal sensor can be a linear thermal sensor). The electrostatic discharge device can be constructed and connected to protect the SSE from large reverse voltages, as discussed above in particular embodiments. In other embodiments, the electrostatic discharge device can be connected with a forward bias to protect the SSE from large forward voltages. In still further embodiments, the SSE can be connected to both types of ESDs, to protect against both high forward and high reverse voltages. Additionally, in certain embodiments, there may be more than one state device for a particular SST device.
Furthermore, material choices for the SSE and substrates can vary in different embodiments of the disclosure. Certain elements of one embodiment may be combined with other embodiments, in addition to or in lieu of the elements of the other embodiments, or may be eliminated. For example, in some embodiments, the disclosed buffer material can be eliminated. In some embodiments, the buffer material can be used to form the SSE, but not the state device. The disclosed state devices can be combined in other embodiments. For example, a single SST device can include any of a variety of combinations of ESD devices, photosensors and/or thermal sensors. Accordingly, the disclosure can encompass other embodiments not expressly shown or described herein.
Aspects of the invention:
1. A solid state transducer system, comprising:
a support substrate;
a solid state emitter carried by the support substrate, the solid state emitter comprising a first semiconductor component, a second semiconductor component, and an active region between the first and second semiconductor components; and
a state device carried by the support substrate and positioned to detect a state of at least one of the solid state emitter and an electrical path of which the solid state emitter forms a part, wherein the state device is formed from at least one state-sensing component having a composition different than that of the first semiconductor component, the second semiconductor component, and the active region, and wherein the state device and the solid state emitter are stacked along a common axis.
2. The system of aspect 1 wherein the state device includes at least one of a thermal sensor and a photosensor, and wherein the system further comprises:
a controller operatively coupled to the solid state emitter and the state device to receive a signal from the state device and control the solid state emitter based at least in part on the signal received from the state device.
3. The system of aspect 2 wherein the state device includes a photosensor positioned to receive radiation emitted by the solid state emitter, and wherein the signal corresponds to a characteristic of the radiation.
4. The system of aspect 2 wherein the state device includes a thermal sensor positioned to receive thermal energy produced by the solid state emitter, and wherein the signal corresponds to a temperature.
5. The system of aspect 1 wherein the state device includes an electrostatic discharge device coupled in parallel with the solid state emitter, and wherein the electrostatic discharge device is responsive to a voltage applied to the solid state emitter.
6. The system of aspect 1 wherein the solid state emitter, the state device, and the support substrate form a single die, and wherein the support substrate is the only support substrate of the die.
7. The system of aspect 1 wherein the state device is formed from a plurality of materials disposed conformally and sequentially on the solid state emitter.
8. The system of aspect 1 wherein the active region of the solid state emitter includes a first semiconductor material having a first composition and wherein the state-sensing component includes a second semiconductor material having a second composition different than the first composition.
9. The system of aspect 1, further comprising a reflective material positioned between the solid state emitter and the state device to reflect radiation emitted by the solid state emitter, wherein the reflective material includes an aperture positioned between the active region and the state device to pass radiation from the active region to the state device.
10. The system of aspect 1, further comprising an external surface through which radiation emitted by the active region passes, and wherein the state device is positioned off an optical axis between the active region and the external surface.
11. The system of aspect 1, further comprising an external surface through which radiation emitted by the active region passes, and wherein the state device is positioned on an optical axis between the active region and the external surface.
12. The system of aspect 1, further comprising:
first and second emitter contacts, the first emitter contact electrically connected to the first semiconductor component, the second emitter contact electrically connected to the second semiconductor component; and
first and second state device contacts connected to the state device, the emitter contacts being addressable separately from the state device contacts.
13. The system of aspect 1 wherein the emitter contacts and the state device contacts are accessible from the same side of the solid state emitter.
14. The system of aspect 1 wherein the state device contacts and one of the emitter contacts are accessible from one side of the solid state emitter, and the other emitter contact is accessible from an opposite side of the solid state emitter.
15. The system of aspect 1 wherein the solid state emitter and the state device are integrally formed from portions of a common epitaxial growth substrate.
16. The system of aspect 15, further comprising the epitaxial growth substrate.
17. The system of aspect 1 wherein:
the state device is formed from a plurality of materials disposed conformally and sequentially on the solid state emitter;
the solid state emitter, the state device, and the support substrate form a single die;
the support substrate is the only support substrate of the die; and
the solid state emitter and the state device are integrally formed from portions of a common epitaxial growth substrate.
18. A solid state lighting system, comprising:
a support substrate;
a solid state emitter carried by the support substrate, the solid state emitter comprising a first semiconductor component, a second semiconductor component, and an active region between the first and second semiconductor components; and
a thermal sensor carried by the support substrate and positioned to detect a thermal state of the solid state emitter, wherein the thermal sensor and the solid state emitter are stacked along a common axis.
19. The system of aspect 18, further comprising a controller operatively coupled to the solid state emitter and the thermal sensor to receive a signal from the thermal sensor and control the solid state emitter based at least in part on the signal received from the thermal sensor.
20. The system of aspect 18, further comprising a reflective material positioned between the solid state emitter and the thermal sensor to reflect radiation emitted by the solid state emitter.
21. The system of aspect 18 wherein the thermal sensor includes a serpentine thermistor element, and wherein an impedance of the thermistor element changes as a function of temperature.
22. The system of aspect 18, further comprising:
first and second emitter contacts, the first emitter contact electrically connected to the first semiconductor component, the second emitter contact electrically connected to the second semiconductor component; and
first and second thermal sensor contacts connected to the thermal sensor, the emitter contacts being addressable separately from the thermal sensor contacts.
23. A method for forming a solid state lighting device, comprising:
forming a solid state emitter to include a first semiconductor material, a second semiconductor material, and an active region between the first and second semiconductor materials; and
forming a state device carried by the solid state emitter by disposing on the solid state emitter, in a sequence of process steps and in a stacked manner, state device components including at least one state-sensing component having a composition different than that of the first semiconductor material, the second semiconductor material, and the active region, wherein the at least one state-sensing component is positioned to detect a state of at least one of the solid state emitter and an electrical path of which the solid state emitter forms a part.
24. The method of aspect 23 wherein forming the state device includes forming a thermal sensor in thermal communication with the active region of the solid state emitter.
25. The method of aspect 23 wherein forming the state device includes forming a photosensor positioned to receive radiation emitted by the active region of the solid state emitter.
26. The method of aspect 23 wherein disposing the at least one state-sensing component includes disposing an electrostatic discharge component.
27. The method of aspect 23 wherein forming the solid state emitter includes epitaxially forming at least one component of the solid state emitter on an epitaxial growth substrate, and wherein forming the state device includes forming the state device from a portion of the epitaxial growth substrate.
28. A method for forming a solid state lighting device, comprising:
forming a solid state emitter to include a first semiconductor material, a second semiconductor material, and an active region between the first and second semiconductor materials; and
stacking a thermal sensor relative to the solid state emitter.
29. The method of aspect 28 wherein stacking the thermal sensor includes sequentially disposing components of the thermal sensor on the solid state emitter.
30. The method of aspect 28 wherein stacking the thermal sensor includes pre-forming at least a portion of the thermal sensor and attaching the thermal sensor to the solid state emitter.
31. The method of aspect 28 wherein stacking the thermal sensor includes positioning the thermal sensor in direct contact with the solid state emitter.
32. The method of aspect 28 wherein stacking the thermal sensor includes carrying both the thermal sensor and the solid state emitter with a common support substrate.
A page processing circuit (1040) includes a memory (1034) for pages, a processor (1030) coupled to the memory, and a page wiping advisor circuit (2730) coupled to the processor and operable to prioritize pages based both on page type (TYPE in 2740) and usage statistics (STAT in 2740). Processes of manufacture, processes of operation, circuits, devices, telecommunications products, wireless handsets and systems are also disclosed. |
1. A page processing circuit (1040), comprising:
a memory (1034) for pages;
a processor (1030) coupled to said memory (1034); and characterised by
a page wiping advisor circuit (2730) coupled to said processor (1030) and operable to prioritize pages for wiping based both on page type (TYPE) and usage (STAT) statistics, the page wiping advisor circuit (2730) including
a page access counter (2845, 3145) for a time-varying page-specific entry that is settable to an initial value in response to loading a page into the memory (1034), and resettable to a value approximating the initial value in response to a memory access to that page, the page access counter (2845, 3145) being operable to change in value in a progressive departure from the initial value in response to a memory access to a page other than a page to which the counter value pertains;
a concatenation case table (2850, 3150) having a page-specific entry formed from a corresponding page-specific entry from said page access counter (2845, 3145), a page type entry (TYPE) and an entry indicating whether the page has been written (WR[N]=1); and
a conversion circuit (2855, 2955, 3155) arranged to generate a page priority code for each page responsive to the concatenation case table (2850, 3150).
2. The page processing circuit as claimed in claim 1, wherein the generated page priority code has a singleton bit value accompanied by complement bit values, the singleton bit value having a position across the page priority code representing page priority.
3. The page processing circuit as claimed in claim 2, wherein said page wiping advisor circuit (2730) has a detector (2870) to sort the page priority codes for an extreme position of the singleton bit value.
4. The page processing circuit as claimed in claims 2 and 3, wherein said page wiping advisor circuit (2730) is operable to identify a page to wipe by the singleton bit value in its priority code being in an extreme position (R) indicative of highest wiping priority (2870) compared to priority codes of other pages.
5. The page processing circuit as claimed in claim 1, wherein said page wiping advisor circuit (2730) further includes a priority detector circuit (2970) coupled to the conversion circuit (2955) and operable to identify at least one page to wipe based on priority code.
6. The page processing circuit as claimed in claim 1, wherein said page wiping advisor circuit (2730) includes a priority sorting table (2860, 3160) for page-specific wiping priority codes.
7. The page processing circuit as claimed in claim 6, wherein said page wiping advisor circuit includes a priority sorting circuit (2870) operable to identify at least one page in the priority sorting table (2860, 3160) having a highest priority for page wiping.
8. The page processing circuit as claimed in claim 7, wherein said page wiping advisor circuit (2730) further includes a page selection logic (2885) fed by the priority sorting circuit (2870) for selecting a page in the memory (1034) to wipe.
9. The page processing circuit as claimed in any preceding claim, wherein the page type entry (TYPE) comprises code (CODE) or data (DATA) types of pages.
10. The page processing circuit as claimed in claim 1, wherein said page wiping advisor circuit (2730) includes an allocation circuit operable to dynamically respond to page swaps by page type, to allocate page space in said memory.
11. The page processing circuit as claimed in claim 10, wherein the conversion circuit (2855) is operable to prioritize pages of one page type (CODE) and separately prioritize pages of another page type (DATA).
12. The page processing circuit as claimed in any preceding claim, wherein the page wiping advisor circuit (2730) includes a register (ADV, 2880, 3180) for holding page wiping advice, the register being coupled to said processor (1030).
13. The page processing circuit as claimed in claim 1, wherein said page wiping advisor circuit (2730) includes an interrupt (1040) coupled to said processor (1030).
14. The page processing circuit as claimed in any preceding claim, wherein said page wiping advisor circuit (2730) includes a usage level encoder (2720) operable to generate a usage level code in response to said page access counter (2845, 3145).
15. The page processing circuit as claimed in any preceding claim, further comprising a cryptographic circuit coupled to said memory (1034) and operable to perform a cryptographic operation on a page identified by said page wiping advisor circuit (2730).
16. The page processing circuit as claimed in claim 1, further comprising a secure state machine (2060) situated on a single integrated circuit chip (1100, 1400) with said processor (1030) and said memory (1034), said secure state machine (2060) monitoring accesses to said memory (1034), whereby said memory has security.
17. The page processing circuit as claimed in any preceding claim, wherein the page access counter (2845, 3145) is operable to count both read and write accesses to respective pages in said memory (1034).
18. The page processing circuit as claimed in claim 1, further comprising an instruction bus (INST) and a data bus (DATA) coupled to said memory (1034), and wherein said page wiping advisor circuit (2730) is responsive to both said instruction bus and said data bus to form the usage statistics (STAT, 2720).
19. The page processing circuit as claimed in claim 18, further comprising a third bus (2745), and wherein said page wiping advisor circuit (2730) is additionally coupled by said third bus (2745) to said processor (1030).
20. The page processing circuit as claimed in claim 1, wherein said page wiping advisor circuit is operable to prioritize an unmodified page (WR[N]=0) in said memory as having a higher priority for wiping than a modified page (WR[N]=1).
21. The page processing circuit as claimed in claim 1, wherein the page wiping advisor circuit (2875, 2970) is operable to prioritize a first page that has one level of use in said memory (1034) as having a higher priority for wiping than a second page that has another level indicative of greater use in said memory.
22. The page processing circuit as claimed in any preceding claim, wherein the page wiping advisor circuit is operable, when more than one page has the highest page wiping priority, to select a page to wipe from the pages having the highest page priority.
23. The page processing circuit as claimed in any preceding claim, wherein the page wiping advisor circuit (2875, 2970) is operable, when all pages have the lowest page wiping priority, to select a page to wipe from the pages.
24. The page processing circuit as claimed in claim 1, wherein said memory (1034) may have an empty page and an occupied page, and said page wiping advisor circuit (2730) is operable, when the memory has an empty page, to bypass wiping an occupied page.
25. A page processing method for use with a memory having pages, the method comprising:
representing a page [N] by a first entry (TYPE[N]) indicating the page type; and characterised by
representing the page [N] by a second entry (WR[N]) indicating whether the page is modified or not;
further representing the page by a third entry that is set to an initial value (127) by loading a page corresponding to that entry in the memory, reset to a value approximating the initial value in response to a memory access to that page, and changed in value in a progressive departure from the initial value in response to a memory access to a page other than the page to which the third entry pertains;
generating a page priority code for the page from the first, second and third entries; and
identifying at least one page having a highest priority for wiping from the page priority codes.
26. The page processing method as claimed in claim 25, for use with a second memory having a larger capacity than said memory, and further comprising demand paging between said memory and said second memory based on said page priority codes.
27. The page processing method as claimed in claim 25, further comprising swapping out the page, based on said first, second and third entries, to another memory.
28. The page processing method as claimed in claim 25, further comprising performing a cryptographic operation on the page based on said first, second and third entries.
29. A telecommunications unit comprising:
a telecommunications modem;
a microprocessor coupled to said telecommunications modem; and characterised by
page processing circuitry according to any of claims 1 to 24 coupled to said microprocessor.
FIELD OF THE INVENTION
This invention is in the field of electronic computing hardware and software and communications, and is more specifically directed to improved processes, circuits, devices, and systems for page processing and other information and communication processing purposes, and processes of making them. Without limitation, the background is further described in connection with demand paging for communications processing.
BACKGROUND OF THE INVENTION
Wireline and wireless communications, of many types, have gained increasing popularity in recent years. The personal computer with a wireline modem such as a DSL (digital subscriber line) modem or cable modem communicates with other computers over networks. The mobile wireless (or "cellular") telephone has become ubiquitous around the world. Mobile telephony has recently begun to communicate video and digital data, and voice over packet (VoP or VoIP), in addition to cellular voice. Wireless modems, for communicating computer data over a wide area network using mobile wireless telephone channels and techniques, are also available.
Wireless data communication in wireless local area networks (WLAN), such as that operating according to the well-known IEEE 802.11 standard, has become popular in a wide range of installations, ranging from home networks to commercial establishments. Short-range wireless data communication according to the "Bluetooth" technology permits computer peripherals to communicate with a personal computer or workstation within the same room. Numerous other wireless technologies exist and are emerging.
Security techniques are used to improve the security of retail and other business commercial transactions in electronic commerce and to improve the security of communications wherever personal and/or commercial privacy is desirable. Security is important in both wireline and wireless communications.
As computer and communications applications with security become larger and more complex, a need has arisen for technology to inexpensively handle large amounts of software program code and data in a secure manner, such as in pages for those applications, and not necessarily require substantial amounts of additional expensive on-chip memory for a processor to handle those applications.
Processors of various types, including DSP (digital signal processing) chips, RISC (reduced instruction set computing) and/or other integrated circuit devices, are important to these systems and applications. Constraining or reducing the cost of manufacture and providing a variety of circuit and system products with performance features for different market segments are important goals in DSPs, integrated circuits generally and system-on-a-chip (SOC) design.
United States Patent No. 5,537,571 describes a control device for a buffer memory which distinguishes information of the "instruction" type and information of the "data" type, and which replaces stored information with current information according to at least one replacement algorithm. It comprises partitioning means which make available, for at least one of the said types of information, therefore called a limited type, a limited amount of memory, delocalized in the buffer memory, and, when a current information item has to be loaded while the said limited amount has been overloaded by the stored information of limited type, replacement means load it by priority by replacing a stored replaceable information item of limited type.
United States Patent No. 5,224,217 describes a method of implementing the "least-recently-used" (LRU) replacement algorithm in a cache memory. Each data block in the cache memory is numbered with a priority tag ranging from 0 to the number of blocks in the cache memory. The lowest numbered block is always replaced first. The just replaced block is given the highest priority tag and one is subtracted from each other priority tag. When a requested block is found in the cache, one is subtracted from each priority tag greater than the requested priority tag and the requested block is given the highest priority tag.
United States Patent No. 5,757,919 describes a method and system for maintaining integrity and confidentiality of pages paged to an external storage unit from a physically secure environment. An outgoing page is selected to be exported from a physically secure environment to an insecure environment. An integrity check value is generated and stored for the outgoing page by taking a one-way hash of the page using a well-known one-way hash function. The outgoing page is then encrypted using a cryptographically strong encryption algorithm. The encrypted outgoing page is then exported to the external storage. By virtue of the encryption and integrity check, the security of the data on the outgoing page is maintained in the insecure environment.
United States Patent No. 5,386,546 describes a block substitution method of a cache memory including the steps of storing data integrity information with a main memory for each block of the cache memory and calculating a non-reference period of each block. The non-reference periods of the blocks are compared to determine an order of the blocks based on the non-reference periods, and a difference between the non-reference period of the block having a longest non-reference period and the non-reference period of other blocks is calculated. Data integrity in the block having the longest non-reference period is examined and, when there is no data integrity in that block, the data integrity in other blocks is examined in the order of the non-reference period. A block having a longest non-reference period among the blocks having the data integrity is determined, and the determined block is selected as a block to be substituted by a new data block when the difference is smaller than a predetermined value. New data is loaded to the selected block.
Further alternative and advantageous solutions would, accordingly, be desirable in the art.
SUMMARY OF THE INVENTION
The invention resides in a page processing circuit, a page processing method and a telecommunications unit as set out in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a pictorial diagram of a communications system including system blocks, for example a cellular base station, a WLAN AP (wireless local area network access point), a WLAN gateway, a personal computer, and two cellular telephone handsets, any one, some or all of the foregoing improved according to the invention.
FIG. 2 is a block diagram of inventive integrated circuit chips for use in the blocks of the communications system of FIG. 1.
FIG. 3 is a block diagram of inventive hardware and process blocks for selectively operating one or more of the chips of FIG. 2 for the system blocks of FIG. 1.
FIG. 4 is a partially-block, partially data structure diagram for illustrating an inventive process and circuit for secure demand paging (SDP).
FIG. 5 is a block diagram further illustrating an inventive process and circuit for secure demand paging performing a Swap In.
FIG. 6 is a block diagram further illustrating an inventive process and circuit for secure demand paging performing a Swap Out.
FIG. 7 is a block diagram further illustrating an inventive process and circuit for secure demand paging with encryption and DMA (direct memory access).
FIG. 8 is a block diagram further illustrating an inventive process and circuit for secure demand paging with a hash.
FIG. 9 is a block diagram of an inventive integrated circuit for inventively advising and selecting which page(s) to wipe in a demand paging process.
FIG. 10 is a block diagram of inventive registers, data structures and operations for inventively advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9.
FIG. 11 is a block diagram of another inventive embodiment of registers, data structures and operations for inventively advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9.
FIGS. 12A and 12B are flow diagrams of inventive process embodiments for data structures and operations for inventively generating statistics, advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9. FIGS. 12A and 12B are two halves of a composite flow.
FIG. 13 is a block diagram of a further inventive embodiment of registers, data structures and operations for inventively advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9.
FIG. 14 is a flow diagram of an inventive process of manufacture of various embodiments of FIGS. 1-13.
DESCRIPTION OF EMBODIMENTS OF THE INVENTION
In FIG. 1, an improved communications system 1000 has system blocks as described next. Any or all of the system blocks, such as cellular mobile telephone and data handsets 1010 and 1010', a cellular (telephony and data) base station 1050, a WLAN AP (wireless local area network access point, IEEE 802.11 or otherwise) 1060, a Voice WLAN gateway 1080 with user voice over packet telephone 1085 (not shown), and a voice enabled personal computer (PC) 1070 with another user voice over packet telephone 1055 (not shown), communicate with each other in communications system 1000. Each of the system blocks 1010, 1010', 1050, 1060, 1070, 1080 is provided with one or more PHY physical layer blocks and interfaces as selected by the skilled worker in various products, for DSL (digital subscriber line broadband over twisted pair copper infrastructure), cable (DOCSIS and other forms of coaxial cable broadband communications), premises power wiring, fiber (fiber optic cable to premises), and Ethernet wideband network.
Cellular base station 1050 two-way communicates with the handsets 1010, 1010', with the Internet, with cellular communications networks and with the PSTN (public switched telephone network).
In this way, advanced networking capability for services, software, and content, such as cellular telephony and data, audio, music, voice, video, e-mail, gaming, security, e-commerce, file transfer and other data services, internet, world wide web browsing, TCP/IP (transmission control protocol/Internet protocol), voice over packet and voice over Internet protocol (VoP/VoIP), and other services accommodates and provides security for secure utilization and entertainment appropriate to the just-listed and other particular applications.
The embodiments, applications and system blocks disclosed herein are suitably implemented in fixed, portable, mobile, automotive, seaborne, and airborne communications, control, set top box, and other apparatus. The personal computer (PC) 1070 is suitably implemented in any form factor such as desktop, laptop, palmtop, organizer, mobile phone handset, PDA personal digital assistant, internet appliance, wearable computer, personal area network, or other type.
For example, handset 1010 is improved and remains interoperable and able to communicate with all other similarly improved and unimproved system blocks of communications system 1000. On a cell phone printed circuit board (PCB) 1020 in handset 1010, FIGS. 1 and 2 show a processor integrated circuit and a serial interface such as a USB interface connected by a USB line to the personal computer 1070. Reception of software, intercommunication and updating of information are provided between the personal computer 1070 (or other originating sources external to the handset 1010) and the handset 1010. Such intercommunication and updating also occur automatically and/or on request via WLAN, Bluetooth, or other wireless circuitry.
For example, handset 1010 is improved for selectively determinable security and economy when manufactured. Handset 1010 remains interoperable and able to communicate with all other similarly improved and unimproved system blocks of communications system 1000. On a cell phone printed circuit board (PCB) 1020 in handset 1010, there is provided a higher-security processor integrated circuit 1022, an external flash memory and SDRAM 1024, and a serial interface 1026. Serial interface 1026 is suitably a wireline interface, such as a USB interface connected by a USB line to the personal computer 1070 when the user desires, and for reception of software, intercommunication and updating of information between the personal computer 1070 (or other originating sources external to the handset 1010) and the handset 1010. Such intercommunication and updating also occur via a processor such as for cellular modem, WLAN, Bluetooth, or other wireless or wireline modem processor and physical layer (PHY) circuitry 1028.
Processor integrated circuit 1022 includes at least one processor (or central processing unit CPU) block 1030 coupled to an internal (on-chip read-only memory) ROM 1032, an internal (on-chip random access memory) RAM 1034, and an internal (on-chip) flash memory 1036. A security logic circuit 1038 is coupled to secure-or-general-purpose-identification value (Security/GPI) bits 1037 of a non-volatile one-time alterable Production ID register or array of electronic fuses (E-Fuses).
Depending on the Security/GPI bits, boot code residing in ROM 1032 responds differently to a Power-On Reset (POR) circuit 1042 and to a secure watchdog circuit 1044 coupled to processor 1030. A device-unique security key is suitably also provided in the E-fuses or downloaded to other non-volatile, difficult-to-alter parts of the cell phone unit 1010.
It will be noted that the words "internal" and "external" as applied to a circuit or chip respectively refer to being on-chip or off-chip of the applications processor chip 1022. All items are assumed to be internal to an apparatus (such as a handset, base station, access point, gateway, PC, or other apparatus) except where the words "external to" are used with the name of the apparatus, such as "external to the handset."
ROM 1032 provides a boot storage having boot code that is executable in at least one type of boot sequence. One or more of RAM 1034, internal flash 1036, and external flash 1024 are also suitably used to supplement ROM 1032 for boot storage purposes.
Secure Demand Paging (SDP) circuitry 1040 effectively multiplies the memory space that secure applications can occupy. Processor 1030 is an example of circuitry coupled to the Secure Demand Paging block 1040 to execute a process defined by securely stored code and data from a Secure RAM 1034 as if the secure RAM were much larger, by using SDRAM 1024. As described further herein, SDP circuitry 1040 includes real-estate circuitry for determining which secure RAM memory page to wipe, or make available for a new page of code and/or data for a secure application.
FIG. 2 illustrates inventive integrated circuit chips including chips 1100, 1200, 1300, 1400, 1500 for use in the blocks of the communications system 1000 of FIG. 1. The skilled worker uses and adapts the integrated circuits to the particular parts of the communications system 1000 as appropriate to the functions intended. For conciseness of description, the integrated circuits are described with particular reference to use of all of them in the cellular telephone handsets 1010 and 1010' by way of example.
It is contemplated that the skilled worker uses each of the integrated circuits shown in FIG. 2, or such selection from the complement of blocks therein provided into appropriate other integrated circuit chips, or provided into one single integrated circuit chip, in a manner optimally combined or partitioned between the chips, to the extent needed by any of the applications supported by the cellular telephone base station 1050, personal computer(s) 1070 equipped with WLAN, WLAN access point 1060 and Voice WLAN gateway 1080, as well as cellular telephones, radios and televisions, Internet audio/video content players, fixed and portable entertainment units, routers, pagers, personal digital assistants (PDA), organizers, scanners, faxes, copiers, household appliances, office appliances, combinations thereof, and other application products now known or hereafter devised in which there is desired increased, partitioned or selectively determinable advantages next described.
In FIG. 2, an integrated circuit 1100 includes a digital baseband (DBB) block 1110 that has a RISC processor (such as MIPS core, ARM processor, or other suitable processor) and a digital signal processor such as from the TMS320C55x™ DSP generation from Texas Instruments Incorporated or other digital signal processor (or DSP core) 1110, communications software and security software for any such processor or core, security accelerators 1140, and a memory controller.
Security accelerators block 1140 provides additional computing power, such as for hashing and encryption, that is accessible, for instance, when the integrated circuit 1100 is operated in a security level enabling the security accelerators block 1140 and affording types of access to the security accelerators depending on the security level and/or security mode. The memory controller interfaces the RISC core and the DSP core to Flash memory and SDRAM (synchronous dynamic random access memory). On-chip RAM 1120 and on-chip ROM 1130 also are accessible to the processors 1110 for providing sequences of software instructions and data thereto. A security logic circuit 1038 of FIGS. 1 and 2 has a secure state machine (SSM) to provide hardware monitoring of any tampering with security features. Secure Demand Paging (SDP) circuit 1040 of FIGS. 1 and 2 is provided and described further herein.
Digital circuitry 1150 on integrated circuit 1100 supports and provides wireless interfaces for any one or more of GSM, GPRS, EDGE, UMTS, and OFDMA/MIMO (Global System for Mobile communications, General Packet Radio Service, Enhanced Data Rates for Global Evolution, Universal Mobile Telecommunications System, Orthogonal Frequency Division Multiple Access and Multiple Input Multiple Output Antennas) wireless, with or without high speed digital data service, via an analog baseband chip 1200 and GSM/CDMA transmit/receive chip 1300. Digital circuitry 1150 includes ciphering processor CRYPT for GSM ciphering and/or other encryption/decryption purposes. Blocks TPU (Time Processing Unit real-time sequencer), TSP (Time Serial Port), GEA (GPRS Encryption Algorithm block for ciphering at LLC logical link layer), RIF (Radio Interface), and SPI (Serial Port Interface) are included in digital circuitry 1150.
Digital circuitry 1160 provides codec for CDMA (Code Division Multiple Access), CDMA2000, and/or WCDMA (wideband CDMA or UMTS) wireless, suitably with HSDPA/HSUPA (High Speed Downlink Packet Access, High Speed Uplink Packet Access) (or 1xEV-DV, 1xEV-DO or 3xEV-DV) data feature via the analog baseband chip 1200 and RF GSM/CDMA chip 1300. Digital circuitry 1160 includes blocks MRC (maximal ratio combiner for multipath symbol combining), ENC (encryption/decryption), RX (downlink receive channel decoding, de-interleaving, viterbi decoding and turbo decoding) and TX (uplink transmit convolutional encoding, turbo encoding, interleaving and channelizing). Block ENC has blocks for uplink and downlink supporting confidentiality processes of WCDMA.
Audio/voice block 1170 supports audio and voice functions and interfacing. Speech/voice codec(s) are suitably provided in memory space in audio/voice block 1170 for processing by processor(s) 1110. An applications interface block 1180 couples the digital baseband chip 1100 to an applications processor 1400. Also, a serial interface in block 1180 interfaces from parallel digital busses on chip 1100 to USB (Universal Serial Bus) of PC (personal computer) 1070. The serial interface includes UARTs (universal asynchronous receiver/transmitter circuits) for performing the conversion of data between parallel and serial lines. Chip 1100 is coupled to location-determining circuitry 1190 for GPS (Global Positioning System). Chip 1100 is also coupled to a USIM (UMTS Subscriber Identity Module) 1195 or other SIM for user insertion of an identifying plastic card, or other storage element, or for sensing biometric information to identify the user and activate features.
In FIG. 2, a mixed-signal integrated circuit 1200 includes an analog baseband (ABB) block 1210 for GSM/GPRS/EDGE/UMTS/HSDPA/HSUPA which includes SPI (Serial Port Interface), digital-to-analog/analog-to-digital conversion DAC/ADC block, and RF (radio frequency) Control pertaining to GSM/GPRS/EDGE/UMTS/HSDPA/HSUPA and coupled to RF (GSM etc.) chip 1300. Block 1210 suitably provides an analogous ABB for CDMA wireless and any associated 1xEV-DV, 1xEV-DO or 3xEV-DV data and/or voice with its respective SPI (Serial Port Interface), digital-to-analog conversion DAC/ADC block, and RF Control pertaining to CDMA and coupled to RF (CDMA) chip 1300.
An audio block 1220 has audio I/O (input/output) circuits to a speaker 1222, a microphone 1224, and headphones (not shown). Audio block 1220 has an analog-to-digital converter (ADC) coupled to the voice codec and a stereo DAC (digital to analog converter) for a signal path to the baseband block 1210 including audio/voice block 1170, and with suitable encryption/decryption activated.
A control interface 1230 has a primary host interface (I/F) and a secondary host interface to DBB-related integrated circuit 1100 of FIG. 2 for the respective GSM and CDMA paths. The integrated circuit 1200 is also interfaced to an I2C port of applications processor chip 1400 of FIG. 2. Control interface 1230 is also coupled via access arbitration circuitry to the interfaces in circuits 1250 and the baseband 1210.
A power conversion block 1240 includes buck voltage conversion circuitry for DC-to-DC conversion, and low-dropout (LDO) voltage regulators for power management/sleep mode of respective parts of the chip regulated by the LDOs. Power conversion block 1240 provides information to and is responsive to a power control state machine between the power conversion block 1240 and circuits 1250.
Circuits 1250 provide oscillator circuitry for clocking chip 1200. The oscillators have frequencies determined by one or more crystals. Circuits 1250 include a RTC real time clock (time/date functions), general purpose I/O, a vibrator drive (supplement to cell phone ringing features), and a USB On-The-Go (OTG) transceiver. A touch screen interface 1260 is coupled to a touch screen XY 1266 off-chip.
Batteries such as a lithium-ion battery 1280 and backup battery provide power to the system and battery data to circuit 1250 on suitably provided separate lines from the battery pack. When needed, the battery 1280 also receives charging current from a Battery Charge Controller in analog circuit 1250, which includes MADC (Monitoring ADC and analog input multiplexer such as for on-chip charging voltage and current, and battery voltage lines, and off-chip battery voltage, current, temperature) under control of the power control state machine.
In FIG. 2, an RF integrated circuit 1300 includes a GSM/GPRS/EDGE/UMTS/CDMA RF transmitter block 1310 supported by oscillator circuitry with off-chip crystal (not shown). Transmitter block 1310 is fed by baseband block 1210 of chip 1200. Transmitter block 1310 drives a dual band RF power amplifier (PA) 1330. On-chip voltage regulators maintain appropriate voltage under conditions of varying power usage. Off-chip switchplexer 1350 couples wireless antenna and switch circuitry to both the transmit portion 1310, 1330 and the receive portion next described. Switchplexer 1350 is coupled via band-pass filters 1360 to receiving LNAs (low noise amplifiers) for 850/900MHz, 1800MHz, 1900MHz and other frequency bands as appropriate.
Depending on the band in use, the output of the LNAs couples to GSM/GPRS/EDGE/UMTS/CDMA demodulator 1370 to produce the I/Q or other outputs thereof (in-phase, quadrature) to the GSM/GPRS/EDGE/UMTS/CDMA baseband block 1210.
Further in FIG. 2, an integrated circuit chip or core 1400 is provided for applications processing and more off-chip peripherals. Chip (or core) 1400 has interface circuit 1410 including a high-speed WLAN 802.11a/b/g interface coupled to a WLAN chip 1500. Further provided on chip 1400 is an applications processing section 1420 which includes a RISC processor (such as MIPS core, ARM processor, or other suitable processor), a digital signal processor (DSP) such as from the TMS320C55x™ DSP generation from Texas Instruments Incorporated or other digital signal processor, a shared memory controller MEM CTRL with DMA (direct memory access), and a 2D (two-dimensional display) graphic accelerator. Speech/voice codec functionality is suitably processed in chip 1400, in chip 1100, or both chips 1400 and 1100.
The RISC processor and the DSP in section 1420 have access via an on-chip extended memory interface (EMIF/CF) to off-chip memory resources 1435 including, as appropriate, mobile DDR (double data rate) DRAM, and flash memory of any of NAND Flash, NOR Flash, and Compact Flash. On chip 1400, the shared memory controller in circuitry 1420 interfaces the RISC processor and the DSP via an on-chip bus to on-chip memory 1440 with RAM and ROM. A 2D graphic accelerator is coupled to frame buffer internal SRAM (static random access memory) in block 1440. A security block 1450 in security logic 1038 of FIG. 1 includes secure hardware accelerators having security features and provided for secure demand paging 1040 as further described herein and for accelerating encryption and decryption. A random number generator RNG is provided in security block 1450. Among the Hash approaches are SHA-1 (Secure Hash Algorithm), MD2 and MD5 (Message Digest version #). Among the symmetric approaches are DES (Data Encryption Standard), 3DES (Triple DES), RC4 (Rivest Cipher), ARC4 (related to RC4), TKIP (Temporal Key Integrity Protocol, uses RC4), and AES (Advanced Encryption Standard). Among the asymmetric approaches are RSA, DSA, DH, NTRU, and ECC (elliptic curve cryptography). The security features contemplated include any of the foregoing hardware and processes and/or any other known or yet to be devised security and/or hardware and encryption/decryption processes implemented in hardware or software.
Security logic 1038 of FIG. 1 and FIG. 2 (1038, 1450) includes hardware-based protection circuitry, also called security monitoring logic or a secure state machine 2060 of FIG. 3. Security logic 1038 is coupled to and monitors busses and other parts of the chip for security violations and protects and isolates the protected areas. Security logic 1038 makes secure ROM space inaccessible, makes secure RAM and register space inaccessible, and establishes any other appropriate protections to additionally foster security. In one embodiment, a software jump from Flash memory to secure ROM, for instance, causes a security violation wherein, for example, the security logic 1038 produces an automatic immediate reset of the chip. In another embodiment, such a jump causes the security monitoring logic to produce an error message and a re-vectoring of the jump away from secure ROM.
Other security violations would include attempted access to secure register or RAM space.
On-chip peripherals and additional interfaces 1410 include a UART data interface and MCSI (Multi-Channel Serial Interface) voice wireless interface for an off-chip IEEE 802.15 ("Bluetooth" and high and low rate piconet and personal network communications) wireless circuit 1430. Debug messaging and serial interfacing are also available through the UART. A JTAG emulation interface couples to an off-chip emulator Debugger for test and debug. Further in peripherals 1410 are an I2C interface to analog baseband ABB chip 1200, and an interface to applications interface 1180 of integrated circuit chip 1100 having digital baseband DBB.
Interface 1410 includes a MCSI voice interface, a UART interface for controls, and a multi-channel buffered serial port (McBSP) for data. Timers, interrupt controller, and RTC (real time clock) circuitry are provided in chip 1400. Further in peripherals 1410 are a MicroWire (u-wire 4 channel serial port) and multi-channel buffered serial port (McBSP) to Audio codec, a touch-screen controller, and audio amplifier 1480 to stereo speakers. External audio content and touch screen (in/out) and LCD (liquid crystal display) are suitably provided. Additionally, an on-chip USB OTG interface couples to off-chip Host and Client devices. These USB communications are suitably directed outside handset 1010, such as to PC 1070 (personal computer), and/or from PC 1070 to update the handset 1010.
An on-chip UART/IrDA (infrared data) interface in interfaces 1410 couples to off-chip GPS (global positioning system block cooperating with or instead of GPS 1190) and Fast IrDA infrared wireless communications device. An interface provides EMT9 and Camera interfacing to one or more off-chip still cameras or video cameras 1490, and/or to a CMOS sensor of radiant energy. Such cameras and other apparatus all have additional processing performed with greater speed and efficiency in the cameras and apparatus and in mobile devices coupled to them with improvements as described herein. Further in FIG. 2, an on-chip LCD controller and associated PWL (Pulse-Width Light) block in interfaces 1410 are coupled to a color LCD display and its LCD light controller off-chip.
Further, on-chip interfaces 1410 are respectively provided for off-chip keypad and GPIO (general purpose input/output). On-chip LPG (LED Pulse Generator) and PWT (Pulse-Width Tone) interfaces are respectively provided for off-chip LED and buzzer peripherals. On-chip MMC/SD multimedia and flash interfaces are provided for off-chip MMC Flash card, SD flash card and SDIO peripherals.
In FIG. 2, a WLAN integrated circuit 1500 includes MAC (media access controller) 1510, PHY (physical layer) 1520 and AFE (analog front end) 1530 for use in various WLAN and UMA (Unlicensed Mobile Access) modem applications. PHY 1520 includes blocks for Barker coding, CCK, and OFDM. PHY 1520 receives PHY Clocks from a clock generation block supplied with a suitable off-chip host clock, such as at 13, 16.8, 19.2, 26, or 38.4 MHz. These clocks are compatible with cell phone systems and the host application is suitably a cell phone or any other end-application. AFE 1530 is coupled by receive (Rx), transmit (Tx) and CONTROL lines to WLAN RF circuitry 1540. WLAN RF 1540 includes a 2.4 GHz (and/or 5 GHz) direct conversion transceiver, or otherwise, and power amplifier, and has low noise amplifier LNA in the receive path. Bandpass filtering couples WLAN RF 1540 to a WLAN antenna.
In MAC 1510, Security circuitry supports any one or more of various encryption/decryption processes such as WEP (Wired Equivalent Privacy), RC4, TKIP, CKIP, WPA, AES (advanced encryption standard), 802.11i and others. Further in WLAN 1500, a processor comprised of an embedded CPU (central processing unit) is connected to internal RAM and ROM and coupled to provide QoS (Quality of Service) IEEE 802.11e operations WME, WSM, and PCF (packet control function). A security block in WLAN 1500 has busing for data in, data out, and controls interconnected with the CPU. Interface hardware and internal RAM in WLAN 1500 couple the CPU with interface 1410 of applications processor integrated circuit 1400, thereby providing an additional wireless interface for the system of FIG. 2. Still other additional wireless interfaces, such as for wideband wireless such as IEEE 802.16 "WiMAX" mesh networking and other standards, are suitably provided and coupled to the applications processor integrated circuit 1400 and other processors in the system.
Described next are improved secure circuits, structures and processes that improve the systems and devices of FIGS. 1 and 2.
FIG. 3 illustrates an advantageous form of software modes and architecture 2000 for the integrated circuits 1100 and 1400. Encrypted secure storage 2010 and a file system 2020 provide storage for this arrangement. Selected contents or all contents of encrypted secure storage 2010 are further stored in a secure storage area 2025.
Next, a secure mode area of the architecture is described. In a ROM area of the architecture 2000, secure ROM code 2040 together with secure data such as cryptographic key data are manufactured into an integrated circuit such as 1100 or 1400 including processor circuitry. Also a secure RAM 2045 is provided. Secret data such as key data is copied or provided into secure RAM 2045 as a result of processing of the Secure ROM Code 2040. Further in the secure mode area are modules suitably provided for RNG (Random Number Generator), SHA-1/MD5 hashing software and processes, DES/3DES (Data Encryption Standard single and triple-DES) software and processes, AES (Advanced Encryption Standard) software and processes, and PKA (Private Key Authentication) software and processes.
Further in FIG. 3, secure demand paging SDP 1040 hardware and/or software effectively increases Secure RAM 2045 by demand paging from secure storage 2010. A hardware-implemented secure state machine (SSM) 2060 monitors the buses, registers, circuitry and operations of the secure mode area of the architecture 2000. In this way, addresses, bits, circuitry inputs and outputs, and operations and sequences of operations that violate predetermined criteria of secure operation of the secure mode area are detected. SSM 2060 then provides any or all of warning, denial of access to a space, forcing of reset and other protective measures. Use of independent on-chip hardware for SSM 2060 advantageously isolates its operations from software-based attacks. SSM 2060 is addressable and configurable to enable a Hashing module, enable an Encryption/Decryption module, and lock Flash and DRAM spaces.
SSM 2060 monitors busses and other hardware blocks, pin boundary and other parts of the chip for security violations and protects and isolates the protected areas. SSM 2060 makes secure ROM and register space inaccessible, and secure RAM space inaccessible, and establishes any other appropriate protections to additionally foster security.
In one embodiment, a software jump from flash to secure ROM, for instance, causes a security violation wherein, for example, SSM 2060 produces an automatic immediate reset of the chip. In another embodiment, such a jump causes the security monitoring logic to produce an error message and a re-vectoring of the jump away from secure ROM. Other security violations would include attempted access to reconfigure the SSM 2060 or attempted access to secure RAM space.
In FIG. 3, a kernel mode part of the software architecture includes one or more secure environment device drivers 2070. Driver 2070 of FIG. 3 is suitably provided as a secure environment device driver in kernel mode.
Further in FIG. 3, a user application 2080 communicates to and through a secure environment API (application peripheral interface) software module 2085 to the secure environment device driver 2070. Both the user app 2080 and API 2085 are in a user mode part of the software architecture.
A protected application 2090 provides an interface, as security may permit, to information in file system 2020, secure storage 2025, and a trusted library 2095 such as an authenticated library of software for the system.
Turning to FIG. 4, a Secure Demand Paging (SDP) 1040 secure hardware and software mechanism desirably has efficient page wiping for replacement in internal Secure RAM 1034 of physical pages not currently or often used by the software application, such as protected application 2090. Such pages include pages that may or may not need to be written back to external or other memory.
An SDP 1040 hardware and software process efficiently governs the finding of appropriate pages to wipe, and various embodiments confer different mixes of low complexity, low memory space and chip real-estate space occupancy, low time consumption, low power consumption and low processing burden. The quality of the choice of the page to wipe out for replacement is advantageously high. "Wipe" includes various alternatives to overwrite, erase, simply change the state of a page-bit that tags or earmarks a page, and other methods to free or make available a page space or slot for a new page.
A hardware-based embodiment efficiently identifies the appropriate page to wipe and applies further efficient SDP swap and other structures and operations. In this embodiment, a hardware mechanism monitors the different internal RAM pages used by the SDP software mechanism. The hardware mechanism also detects and flags, via registers accessible by software, which page is Dirty (modified) or Clean (unmodified). (A Write access to a page makes it become Dirty.)
This embodiment also computes, according to the ordered Read and Write accesses that occurred on the different pages, statistical information about the internal RAM page Usage Level. Usage Level is divided into Very Low usage, Low usage, Medium usage, and High usage, for instance.
SDP 1040 then computes from all the information, according to an embedded sorting process, which pages are the more suitable pages to be wiped. SDP 1040 variously considers, for example, the impact of each page on the current application and the time required for a page to be wiped out. Wiping a low usage page impacts the application slightly, but a higher usage page is needed by the application more. A Dirty page consumes writeback time to external memory and a Clean page does not need to be written back.
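As a software analogue of the counter mechanism just described (and of the page access counter and usage level encoder recited in the claims), consider the following minimal C sketch. The initial value of 127, the reset-on-access behaviour, the aging on accesses to other pages, and the four usage buckets follow the text; the array sizes, bucket thresholds and function names are illustrative assumptions, not the circuit itself.

/* Aging page-access counters and a usage-level encoder, sketched in
 * software. A hardware embodiment would keep these per-page entries
 * in registers updated on bus accesses. */
#include <stdint.h>

#define NPAGES 16
#define COUNT_INIT 127

enum usage { VERY_LOW, LOW, MEDIUM, HIGH };

static uint8_t count[NPAGES];   /* per-page access counters */
static uint8_t dirty[NPAGES];   /* WR[N]: set on any write to page N */

void on_page_load(int n)  { count[n] = COUNT_INIT; dirty[n] = 0; }

void on_access(int page, int is_write)
{
    for (int n = 0; n < NPAGES; ++n) {
        if (n == page)
            count[n] = COUNT_INIT;   /* reset toward the initial value */
        else if (count[n] > 0)
            count[n]--;              /* progressive departure for other pages */
    }
    if (is_write)
        dirty[page] = 1;             /* a Write access makes the page Dirty */
}

enum usage usage_level(int n)        /* usage level encoder */
{
    if (count[n] >= 96) return HIGH;     /* thresholds are assumptions */
    if (count[n] >= 64) return MEDIUM;
    if (count[n] >= 32) return LOW;
    return VERY_LOW;
}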
Then the process logs the results to prognostic registers such as the page counters described hereinbelow. Subsequently, the SDP software mechanism just reads the prognostic registers to find the best pages to wipe.

In the case of a strong security embodiment, the SDP 1040 hardware and/or software just described herein is configured and accessed by the main processing unit in Secure Mode, or in highly privileged modes, without impact on the main processing unit functionality. Restrictions on Secure Mode and privilege are removed in whole or in part for less secure embodiments. Some embodiments make demand paging itself more efficient without an SSM 2060. Other embodiments provide security features that, together with the improved demand paging, provide a Secure Demand Pager or SDP.

Some embodiments improve very significantly the page selection mechanism with regard to the competing demands of time and power consumption, and the quality of the choice of the page to wipe out for replacement.

Some embodiments generate automatically, and with little or no time overhead, the dirty page status and the best page to wipe.

Hardware-based embodiments are often more resistant to tampering by software running in processor modes other than Secure or Privileged Modes. That is, such embodiments are less sensitive to a Denial of Service (DoS) attack on an internal mechanism, which might otherwise force a software application not to run properly.

Some embodiments having Dirty page status generating circuits further detect whether Code pages used in internal RAM have been modified by an attacker. This capability contributes to the security robustness of SDP paging methods.

Any demand paging system, whether secure or not, can be improved according to the teachings herein. The benefits depend on the relative system Swap Out and Swap In times, on the access-time mix of the various types of external storage devices from which even the Swap In times to on-chip RAM vary, and on other factors. The improvements taught herein are of benefit in a Secure Demand Paging system with Swaps between on-chip RAM and off-chip DRAM, for instance, because Swap Out is used for modified pages and not for unmodified pages, and because in some systems the Swap Out time, with the delay that encryption and/or hashing adds, is greater relative to the Swap In time than it would be in a less-secure system lacking the encryption and/or hashing.

Various embodiments are implemented in any integrated circuit manufacturing process such as different types of CMOS (complementary metal oxide semiconductor), SOI (silicon on insulator), SiGe (silicon germanium), and with various types of transistors such as single-gate and multiple-gate (MUGFET) field effect transistors, and with single-electron transistors and other structures. Embodiments are easily adapted to any targeted computing hardware platform, supporting or not supporting a secure execution mode, such as UNIX workstations and PC-desktop platforms.
FIGS. 4 and 8 depict external storage SDRAM 1024 and a secure Swapper of 4K pages being Swapped In and Swapped Out of the secure environment. A process of the structure and flow diagram of FIG. 4 suitably executes inside the secure environment as an integral part of the SDP manager code. Note that many pages illustrated in the SDP 1040 are held or stored in the external SDRAM 1024 and greatly increase the effective size of on-chip secure memory 1034.

The SDP 1040 has a pool of pages that are physically loaded with data and instructions taken from a storage memory, suitably encrypted (or not), external to the secure mode. SDP 1040 creates virtual memory in secure mode and thus confers the advantages of executing software that far exceeds (e.g., up to 4 Megabytes or more in one example) the storage space in on-chip Secure RAM.

In FIG. 4, Secure RAM 1034 stores a pool of 4K pages, shown as a circular data structure in the illustration. The pool of pages in Secure RAM 1034 is updated by the SDP according to Memory Management Unit (MMU) page faults resulting from execution of secure software currently running on the system.

In FIG. 4, a processor, such as a RISC processor, has a Memory Management Unit MMU with Data Abort and Prefetch Abort outputs. The processor runs SDP Manager code designated Secure Demand Paging Code in FIG. 4. The SDP Manager is suitably fixed in a secure storage of the processor and need not be swapped out to an insecure area. See coassigned, co-filed U.S. non-provisional patent application TI-38213, "Methods, Apparatus, and Systems for Secure Demand Paging and Other Paging Operations for Processor Devices".

At left, Protected Applications (PAs) occupy a Secure Virtual Address Space 2110 having Virtual Page Slots of illustratively 4K each. In this way, a Secure Virtual Memory (SVM) is established. Secure Virtual Address Space 2110 has Code pages I, J, K; Data pages E, F, G; and a Stack C. The Secure Virtual Address Space as illustrated has a Code page K and a Data page G which are respectively mapped to physical page numbers 6 and 2 in MMU Mapping Tables 2120, also designated PA2VA (physical address to virtual address). In some embodiments, the PA has its code secured by PKA (public key acceleration).

Some embodiments have MMU Mapping Table 2120 in block MMU of FIG. 4 with Page Table Entries (PTEs) of 32 bits each, for instance. In operation, the PA (Protected Application) and the MMU Mapping Table 2120 are maintained secure on-chip. In other embodiments, a Physical-Address-to-Virtual-Address table PA2VA 2120 provided for SDP 1040 has PTEs pertaining specifically to pages stored in Secure RAM, as illustrated in FIG. 4.

One of the bits in a PTE is a Valid/Invalid bit (also called an Active bit ACT[N] herein), illustrated with zero or one for Invalid (I) or Valid (V) entries respectively. An Invalid (I) bit state in ACT[N] or in the MMU Mapping Table for a given page causes an MMU page fault or interrupt when a virtual address is accessed corresponding to a physical address in that page which is absent from Secure RAM.

Further in FIG. 4, a hardware arrangement is located in, associated with, or under the control of a RISC processor. The RISC processor has an MMU (memory management unit) that has data abort and/or prefetch abort operations. The hardware supports the secure VAS (virtual address space) and includes a Secure static RAM. The Secure RAM is illustrated as a circular data structure, or revolving scavengeable store, with physical pages 1, 2, 3, 4, 5, 6.
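A minimal C sketch of the PA2VA bookkeeping just described follows, assuming the illustrated six-page pool and one Active/Valid bit per entry. The structure and function names are hypothetical, and a real implementation would sit behind the MMU and SSM protections rather than in plain software.

```c
#include <stdint.h>

#define NUM_PHYS_PAGES 6       /* pool of 4K pages in Secure RAM, per FIG. 4 */

/* Hypothetical PA2VA entry: one per physical Secure RAM page. */
typedef struct {
    uint32_t virt_base;        /* 4K-aligned virtual slot currently mapped */
    uint8_t  vmc;              /* virtual machine context identifier       */
    uint8_t  active;           /* ACT[N]: 1 = Valid (V), 0 = Invalid (I)   */
} pa2va_entry_t;

static pa2va_entry_t pa2va[NUM_PHYS_PAGES];

/* Returns the physical page index backing a virtual address, or -1 to
 * model the page-fault path (page absent from Secure RAM, so the MMU
 * raises a Data Abort or Prefetch Abort and SDP swaps the page in). */
static int lookup_page(uint32_t vaddr, uint8_t cur_vmc)
{
    for (int n = 0; n < NUM_PHYS_PAGES; n++) {
        if (pa2va[n].active &&
            pa2va[n].vmc == cur_vmc &&
            pa2va[n].virt_base == (vaddr & ~0xFFFu))
            return n;
    }
    return -1;                 /* page fault: trigger Swap In */
}
```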
Stack C is swapped into physical page 5 of Secure SRAM, corresponding with the previously-mentioned Page Table Entry 5 for Stack C in the MMU Mapping Tables. Similarly, Code K is swapped into physical page 6 of Secure SRAM, corresponding with the previously-mentioned Page Table Entry 6 for Code K in the MMU Mapping Tables.

Associated with the Secure RAM is a Secure Swapper 2160. Secure Swapper 2160 is illustrated in FIGS. 5-8 and has secure Direct Memory Access (DMA) that feeds AES (encryption) and SHA (hashing) hardware accelerators. The secure swapping process and hardware protect the PA information at all times.

In FIG. 4, coupled to the Secure Swapper DMA is a non-secure DRAM 1024 holding encrypted and authenticated pages provided by SDP secure swapper 2160. The DRAM pages are labeled pages A, B, C (mapped to physical page 5), D, E, F, G (mapped to physical page 2), H, I, J, K (mapped to physical page 6), and L.

SDP hardware provides secure page swapping, and the virtual address mapping process is securely provided under Secure Mode. Code and Data for SDP Manager software are situated in Secure RAM in a fixed PPA (primary protected application) memory address space from which swapping is not performed. Execution of code sequences 2150 of the SDP Code controls Secure Swapper 2160. For example, a High Level Operating System (HLOS) calls code to operate Public Key Acceleration (PKA) or a secure applet. The PKA is a secure-state application (PA) that is swapped into Secure RAM as several pages of PKA Code, Data and Stack.

In FIG. 4, a number N-1 of Valid bits exist in the page entries of the MMU Mapping Tables 2120 at any one time, because of the number N (e.g., six in the illustration) of available Secure RAM 1034 pages. In some embodiments, one spare page is suitably kept or maintained for performance reasons. Page Data is copied, swapped, or ciphered securely to and from the DRAM 1024 to allow the most efficient utilization of expensive Secure RAM space. Secure RAM pages are positioned exactly at the virtual address positions where they are needed, dynamically and transparently in the background to PAs.

In FIG. 4, SDP software coherency with the hardware is maintained by the MMU so that part of the software application is virtually mapped in a Secure OS (Operating System) virtual machine context VMC according to Virtual Mapping 2110. In this example, the VMC is designated by entries "2" in a column of PA2VA. If a context switch is performed, then the VMC entries in PA2VA are changed to a new VMC identification number. The part of the software application is that part physically located in the Secure RAM, and it has a Physical Mapping 2120 according to a correspondence of Virtual Pages of Virtual Mapping 2110 to respective physical pages of the Physical Mapping 2120. The information representing this correspondence of Virtual Mapping to Physical Mapping is generated by the MMU and stored in internal buffers of the MMU.

The virtual space is configured by the MMU, and the DRAM 1024 is physically addressed. Some embodiments use a single translation vector or mapping PA2VA from the virtual address space to the physical address space according to a specific mapping function, such as addition (+) by itself or concatenated with more significant bits (MSB), given as

virtual_address_space = physical_address_space + x + y,

where x is an MSB offset in an example 4-GByte memory range [0 : 4GB], and where y is an LSB offset between the virtual address and the physical address in Secure RAM.
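The mapping function reduces to two table-driven offsets. The sketch below is a loose illustration under the stated assumptions (an MSB offset x selected per VMC, and an LSB offset y per physical page slot); the array names are hypothetical stand-ins for the VMC_MMU_TABLE and PA2VA structures mentioned herein.

```c
#include <stdint.h>

/* Hypothetical offset tables: x per virtual machine context (MSB part),
 * y per physical page slot in Secure RAM (LSB part). */
static uint32_t vmc_msb_offset[8];     /* stand-in for VMC_MMU_TABLE */
static uint32_t pa2va_lsb_offset[16];  /* stand-in for PA2VA offsets */

/* virtual_address = physical_address + x + y, per the mapping above. */
static uint32_t phys_to_virt(uint32_t paddr, unsigned vmc, unsigned page)
{
    return paddr + vmc_msb_offset[vmc] + pa2va_lsb_offset[page];
}
```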
In FIG. 4, the scavenging process puts a new page in a location in physical Secure RAM 1034 space depending on where a previous page is swapped out. Accordingly, in Secure RAM space, the additional translation table PA2VA 2120 provides an LSB address offset value to map between the virtual address and the physical address in Secure RAM. MSB offsets x are stored in a VMC_MMU_TABLE in Secure RAM.

In some mixed-memory embodiments, DRAM 1024 has enough shorter access time or lower power usage than Flash memory to justify loading and using DRAM 1024 with pages that originally reside in Flash memory. In other embodiments, SDP swaps in the PA from Flash memory for read-only pages, like PA code pages, and the PA is not copied to DRAM. In still other embodiments, parts of the PA are in Flash memory and other parts of the PA are copied into DRAM 1024 and accessed from DRAM 1024. Accordingly, a number of embodiments accommodate various tradeoffs that depend on, among other things, the relative economics and technology features of various types of storage.

In establishing Mappings 2110 and 2120 and the correspondence therebetween, the following coherency matters are handled by SDP.

When loading a new page into Page Slot N in Secure RAM as described in FIG. 5, the previous Virtual to Physical mapping is no longer coherent. The new page corresponds to another part of the source application. The Virtual Mapping 2110 regarding the Swapped Out previous page N is obsolete regarding Page N. Entries in the MMU internal buffers representing the previous Virtual to Physical Mapping correspondence are now invalidated. An access to that Swapped Out page generates a Page Fault signal.

Also, entries in an instruction cache hierarchy at all levels (e.g., L1 and L2) and in a data cache hierarchy at all levels are invalidated to the extent they pertain to the previous Virtual to Physical Mapping correspondence. Accordingly, a Swapped Out code page is handled for coherency purposes by an instruction cache range invalidation relative to the address range of the Code page. A Data page is analogously handled by a data cache range invalidation operation relative to the address range of the Data page. Additionally, for loading Code pages, a BTAC (Branch Target Address Cache, or Branch Target Buffer BTB) flush is executed at least in respect of the address tags in the page range of a wiped Code page, in order to avoid taking a predicted branch to an invalidated address.

When wiping out a page from Secure RAM, some embodiments have Code pages that are always read-only. Various of these embodiments distinguish between Data (Read/Write) pages and Code (Read Only) pages. If the page to wipe out is a Data page, then to maintain coherency, two precautions are executed. First, the Data cache range is made clean (dirty bit reset) in the range of addresses of the Data page. Second, the Write Buffer is drained so that any data retained in the data caches (L1/L2) are written and posted writes are completed. If the page to wipe out is a Code page, the wiping process does not need to execute the just-named precautions, because read-only Code pages were assumed in this example. If Code pages are not read-only, then the precautions suitably are followed.
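The coherency precautions can be summarized in one routine. The C sketch below is purely illustrative: the five extern primitives are hypothetical placeholders for whatever cache, write-buffer, BTAC, and MMU maintenance operations the target core actually provides, and the read-only-code assumption of this example is retained.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE 0x1000u

/* Hypothetical platform maintenance primitives (not a real API). */
extern void mmu_invalidate_entry(uint32_t va);
extern void icache_invalidate_range(uint32_t va, uint32_t len);
extern void btac_flush_range(uint32_t va, uint32_t len);
extern void dcache_clean_invalidate_range(uint32_t va, uint32_t len);
extern void write_buffer_drain(void);

/* Coherency actions when wiping the page mapped at virtual address va. */
static void wipe_coherency(uint32_t va, bool is_code_page)
{
    mmu_invalidate_entry(va);  /* previous V-to-P mapping now obsolete */
    if (is_code_page) {
        icache_invalidate_range(va, PAGE_SIZE);
        btac_flush_range(va, PAGE_SIZE); /* no predicted branch to stale code */
    } else {
        dcache_clean_invalidate_range(va, PAGE_SIZE); /* dirty bits reset */
        write_buffer_drain();            /* complete posted writes */
    }
}
```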
The SDP paging process desirably executes as fast as possible when wiping pages. Intelligent page choice reduces or minimizes the frequency of unnecessary page wipes or Swaps, since an intelligent page choice procedure as disclosed herein leaves pages in Secure RAM that are likely to be soon used again. Put another way, if a page were wiped from Secure RAM that software is soon going to use again, then SDP would consume valuable time and power to import the same page again.

An additional consideration in the SDP paging process is that the time consumption for wiping pages varies with the Type of page. For example, suppose a Code page is not required to be written back to the external memory because the Code page is read-only and thus has not been modified. Also, a Data page that has not been modified does not need to be written back to the external memory. By contrast, a Data page that has been modified is encrypted and hashed and written back to the external memory as described in connection with FIG. 6.

FIG. 5 depicts SDP hardware and an SDP process 2200 when importing a new page from SDRAM 1024. Consider an encrypted application in the SDRAM 1024. The description here equally applies to Code pages and Data pages. A step 2210 operates so that when a new page is needed by a processor and that page is missing from Secure RAM 1034, that page is read from an application source location in the SDRAM 1024. Next, a step 2220 performs a Secure DMA (Direct Memory Access) operation to take the new page and transfer the new page to a decryption block 2230. In a step and structure 2240, the decryption block 2230 executes decryption of the page by AES (Advanced Encryption Standard) or 3DES (Triple Data Encryption Standard) or another suitable decryption process. As the AES/3DES accelerator 2230 is decrypting the content, the output of the AES/3DES accelerator 2230 is taken by another Secure DMA operation in a step 2250.

Then, in FIG. 5, Secure DMA overwrites a wiped Secure RAM page with the new page, e.g., at page position Page4 in the Secure RAM 1034. Further, Secure DMA in a step 2260 takes the new page from Secure RAM 1034 and transfers the new page in a step 2270 to a hashing accelerator 2280, in process embodiments that authenticate pages. The hashing accelerator 2280 calculates the hash of the new page by SHA-1 hashing or another suitable hashing process to authenticate the page. A comparison structure and step 2285 compares the page hash with a predetermined hash value. If the page hash fails to match the predetermined hash value, the page is wiped from Secure RAM in a step 2290, or alternatively is not written to Secure RAM 1034 in step 2250 until the hash authentication is successful. If the page hash matches the predetermined hash value for that page, the page remains in Secure RAM, or alternatively is written to Secure RAM by step 2250, and the page is regarded as successfully authenticated. A suitable authentication process is used with a degree of sophistication commensurate with the importance of the application.
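A compact software model of the FIG. 5 Swap In flow follows. It is a sketch, not the hardware: the DMA, AES/3DES, and SHA-1 accelerator operations are abstracted behind hypothetical functions, and the staging buffer stands in for the DMA path.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 0x1000u
#define HASH_LEN  20       /* SHA-1 digest length in bytes */

/* Abstracted accelerator operations (hypothetical signatures). */
extern void secure_dma_copy(void *dst, const void *src, uint32_t len);
extern void aes_decrypt_page(void *dst, const void *src, uint32_t len);
extern void sha1_page(uint8_t digest[HASH_LEN], const void *src, uint32_t len);

/* Swap In one encrypted page from SDRAM into a wiped Secure RAM slot.
 * Returns 0 on success, -1 if hash authentication fails (page wiped). */
static int swap_in_page(void *secure_slot, const void *sdram_page,
                        const uint8_t expected_hash[HASH_LEN])
{
    uint8_t staging[PAGE_SIZE];   /* illustrative DMA staging buffer */
    uint8_t digest[HASH_LEN];

    secure_dma_copy(staging, sdram_page, PAGE_SIZE);   /* step 2220 */
    aes_decrypt_page(secure_slot, staging, PAGE_SIZE); /* steps 2230-2250 */
    sha1_page(digest, secure_slot, PAGE_SIZE);         /* steps 2260-2280 */

    if (memcmp(digest, expected_hash, HASH_LEN) != 0) {/* step 2285 */
        memset(secure_slot, 0, PAGE_SIZE);             /* step 2290: wipe */
        return -1;
    }
    return 0;                     /* page authenticated and resident */
}
```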
FIG. 6 depicts an SDP process 2300 of wiping out and Swapping Out a page. The SDRAM, Secure RAM, Secure DMA, encryption/decryption accelerator 2330, and hashing accelerator 2390 are the same as in FIG. 5, or are provided as additional structures analogous to those in FIG. 5. The process steps are specific to the distinct SDP process of wiping out a page such as Page4. In a version of the wiping out process 2300, a step 2310 operates Secure DMA to take a page to wipe and Swap Out, e.g., Page4 from Secure RAM 1034. A step 2320 transfers the page by Secure DMA to the AES/3DES encryption accelerator 2330. Then in a step 2340 the AES/3DES encryption accelerator encrypts the content of the page. Secure DMA takes the encrypted page from the AES/3DES encryption accelerator in a succeeding step 2350 and transfers and writes the page into the external SDRAM memory, overwriting the previous page therein. In the process, the wiped out Page4 information may be destroyed in the internal Secure RAM 1034, such as by erasing or by replacement with a replacement page according to the process of FIG. 5. Alternatively, Page4 may be wiped out by setting a page-specific bit indicating that Page4 is wiped.

In FIG. 6, a further SDP process portion 2360 substitutes the following steps for step 2310. Secure DMA in a step 2370 takes the page from Secure RAM and transfers the page in a step 2385 to the hashing accelerator 2390, in process embodiments involving authenticated pages. The hashing accelerator 2390 calculates and determines the hash value of the new page by SHA-1 hashing or another suitable hashing process. In this way, accelerator 2390 provides the hash value that constitutes the predetermined hash value for use by step 2285 of FIG. 5 in looking for a match (or not) to authenticate the page hash of a received Swapped In page. The page content of Page4 and the thus-calculated hash value are then obtained by Secure DMA in a step 2395, whereupon the process continues through previously-described steps 2320, 2330, 2340, 2350 to write the page and hash value to the external memory SDRAM 1024.

In FIG. 7, an AES/xDES block encryption/decryption functional architecture includes a System DMA block 2410 coupling Secure RAM 2415 to encryption HWA 2420. A RISC processor 2425 operates Secure Software (S/W) in Secure Mode. On Swap Out, an encrypted data block is supplied to Memory 2430 such as a DRAM, Flash memory or GPIOs (General Purpose Input/Outputs). The decryption process on Swap In is the same as the one described in FIG. 7 but with memory 2430 as the data block source and Secure RAM 2415 as the data block destination.

Now consider the flow of an encrypted Swap Out process executed in FIG. 7. In a step 2450, RISC processor 2425 in Secure Mode configures the DMA channels defined by the Internal registers of System DMA 2410 for data transfer to cryptographic block 2420. Upon completion of the configuration, RISC processor 2425 can go out of Secure Mode and execute normal tasks. Next, in a step 2460, Data blocks are automatically transferred from Secure RAM via System DMA 2410 and transferred in step 2470 to encryption block 2420 for execution of AES or xDES encryption of each data block. Then in a step 2480, Data blocks are computed by the chosen HWA (hardware accelerator) crypto-processor 2420 and transmitted as encrypted data to System DMA 2410. The process is completed in a step 2490 wherein encrypted Data blocks are transferred by DMA 2410 to memory 2430.
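The complementary Swap Out flow of FIG. 6, with the hash portion 2360 producing the predetermined value later checked by step 2285, can be modeled the same way; again the accelerator calls are hypothetical abstractions, not the real driver interface.

```c
#include <stdint.h>

#define PAGE_SIZE 0x1000u
#define HASH_LEN  20

/* Same hypothetical accelerator abstractions as in the Swap In sketch. */
extern void secure_dma_copy(void *dst, const void *src, uint32_t len);
extern void aes_encrypt_page(void *dst, const void *src, uint32_t len);
extern void sha1_page(uint8_t digest[HASH_LEN], const void *src, uint32_t len);

/* Swap Out one Dirty page: hash the plaintext (portion 2360), encrypt it
 * (steps 2320-2340), then DMA page and hash out to SDRAM (step 2350).
 * The stored hash becomes the reference value for a later Swap In. */
static void swap_out_page(void *sdram_page, uint8_t sdram_hash[HASH_LEN],
                          const void *secure_slot)
{
    uint8_t staging[PAGE_SIZE];   /* illustrative DMA staging buffer */

    sha1_page(sdram_hash, secure_slot, PAGE_SIZE);     /* steps 2370-2395 */
    aes_encrypt_page(staging, secure_slot, PAGE_SIZE); /* steps 2320-2340 */
    secure_dma_copy(sdram_page, staging, PAGE_SIZE);   /* step 2350 */
}
```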
In FIG. 8, a SHA-1/MD5 Hashing architecture includes the System DMA block 2410 coupling Secure RAM 2415 to Hash HWA 2520. RISC processor 2425 operates Secure Software (S/W) in Secure Mode. System DMA 2410 has Internal Registers fed from the RISC processor. Hash block 2520 has Result registers coupled to the RISC processor. An Interrupt Handler 2510 couples the Hash block 2520 interrupt request IRQ to the RISC processor 2425.

The flow of a Hash process executed in FIG. 8 is described next. In a step 2550, RISC processor 2425 in Secure Mode configures the DMA channels defined by the Internal registers of System DMA 2410 for data transfer to Hash block 2520. Upon completion of the configuration, RISC processor 2425 can go out of Secure Mode and execute normal tasks. Next, in a step 2560, a Data block is automatically transferred from Secure RAM 2415 via System DMA 2410 and transmitted in step 2570 to Hash block 2520. A hash of the data block is generated by the chosen HWA crypto-processor 2520 by SHA-1 or MD5 or another suitable Hash. In a succeeding step 2580, HWA 2520 signals completion of the Hash by generating and supplying interrupt IRQ to Interrupt Handler 2510. Interrupt Handler 2510 suitably handles and supplies the hash interrupt in a step 2590 to RISC processor 2425. When the interrupt is received, if RISC processor 2425 is not in Secure Mode, then RISC processor 2425 re-enters Secure Mode. The process is completed in a step 2595 wherein RISC processor 2425, operating in Secure Mode, gets Hash bytes from the Result registers of HWA 2520.

The description now turns to FIGS. 9, 10 and 11. FIG. 9 shows details of a processor 1030 and SDP 1040. The processor 1030 includes a RISC processor with functional ports. Secure RAM 1034 is coupled via interconnect 2705 to an on-chip Instruction INST Bus and an on-chip DATA Bus. A bus interface 2707 couples the functional ports of the RISC Processor 1030 to the INST and DATA buses. RISC Processor 1030 also has a RISC CPU (central processing unit) coupled to a Peripheral Port block, which is coupled in turn via a bus interface 2709 to an on-chip bus 2745.

Further in FIG. 9, SDP circuitry 1040 is coupled to the INST bus, DATA bus, and on-chip bus 2745. SDP circuitry 1040 is under hardware protection of a Secure State Machine (SSM). SDP circuitry 1040 has a Dirty Bits Checker and Write Access Finder circuit 2710 to detect modifications to pages, a Usage Level Builder circuit 2720, a Page Wiping Advisor circuit 2730, and a secure register group 2740.

Register group 2740 has added secure registers for SDP. These registers ACT, TYPE, WR, STAT, and ADV have bit entries respective to each of the pages 0 to N and are accessible by Secure Supervisor software of SDP. Register group 2740 is coupled to on-chip bus 2745.

Dirty Bits Checker and Write Access Finder circuit 2710 is coupled to the DATA bus and coupled to register group 2740.

Usage Level Builder circuit 2720 has a first block, an Instruction (I) and Read (RD) Access Finder, coupled to the INST bus and DATA bus. This circuit detects each instance of RD access to a Code page via the INST bus, or RD access to any page via the DATA bus.

Usage Level Builder 2720 has a second Usage Level Builder block coupled to receive information from Dirty Bits Checker and Write Access Finder 2710 and from the I and RD Access Finder block in circuit 2720. This second block receives page activation bits from the ACT register in register group 2740 and generates Usage Level data.

Next, the Usage Level data is coupled to a Usage Level Encoder block in circuit 2720. Codes for tiers of Usage Level are fed to the STAT register and to the Page Wiping Advisor 2730 Priority Sorting block.

In Page Wiping Advisor 2730, the Priority Sorting block is coupled to receive page-specific Type data from the TYPE register. Also, the Priority Sorting block is suitably coupled, depending on the embodiment, to receive Usage Level information from the middle Usage Level Builder block in circuit 2720. Further, the Priority Sorting block is suitably coupled to feed back sorting information to that middle Usage Level Builder block.

Further in Page Wiping Advisor 2730, the Priority Sorting block feeds sorting information as described in FIGS. 10 and 11 to the Priority Result block.
The Priority Result block determines which page(s) have the highest priority for wiping and writes this information to the Advice register ADV in register group 2740.

The wiping Advice information in Advice register ADV is accessed by RISC Processor 1030 via bus 2745, such as by interface 2709 and the Peripheral Port. Based on the information in register group 2740, RISC Processor 1030 executes SDP software to swap out a Dirty page N, identified by register bits ADV[N] and WR[N]=1 (one), and swap in a new page; or simply to swap in a new page if the wiped page N, identified by register bits ADV[N] and WR[N]=0 (zero), was a Clean (unmodified) page.

Four Process and structure areas are performed in one or more exemplary SDP paging processes and structures.

First, the Code pages are differentiated from the Data pages by identifying and entering an entry in a page-specific field in the TYPE register respective to each such Code or Data page.

Second, write access activity is monitored in a register WR[N] to determine each page most likely to be a READ page in a pool of Data pages. Register WR[N], in other words, has bits indicating which pages are Clean (unmodified) and which are Dirty (modified).

Third, the ACT register is loaded with page activate entries, and statistical information is built up to populate the STAT register for the activated pages represented in the ACT register.

Fourth, the foregoing three types of information are then utilized, according to a four-STEP process described next, to produce wiping Advice for one or more pages in an ADV register.

In FIG. 9, a process called the Wiping Advisor herein operates in one of various alternative ways, and two examples are described in FIGS. 10 and 11. "&" stands for concatenation and AND means Boolean-AND in the text that follows.

Registers for use in FIGS. 9, 10 and 11 are as follows. The TYPE Page Type register has entries of zero (0) for each Data page and one (1) for each Code page. The WR register has a respective dirty bit for signs of modification of each Secure RAM page. The ACT register has a respective entry to activate or de-activate the Usage Level monitoring of a page. The STAT register holds a respective entry representing one of four Usage Levels for an activated page. The ADV register of FIG. 9 is the Wiping Advisor register 2880 of FIG. 10 and has a respective entry for each page, wherein one (1) means the recommendation is to wipe the page and zero (0) means not to wipe the page. Sixteen page counters or registers with a subtractor are also provided.

The Wiping Advisor has a process with four STEPs: ONE, TWO, THREE and FOUR. STEP ONE handles the First, Second and Third Process and structure areas and sets up priority encodings for each page for the Fourth Process and structure area. STEPs TWO, THREE and FOUR complete the Fourth Process and structure area above.

STEP ONE

First Process and structure area: The Code pages are differentiated from the Data pages by TYPE[N] according to TABLE 1. Code pages come from Instruction Cache accesses and Data pages come from Data Cache accesses, so the access signals to the caches are used to derive and enter the TYPE[N] register bits. Also, some software applications explicitly identify which pages are code pages and which pages are data pages.
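For orientation, register group 2740 can be pictured as five page-indexed fields. The C view below is a hypothetical layout for the sixteen-page example; actual register addresses, widths, and the Secure Supervisor access protections are not specified here.

```c
#include <stdint.h>

/* Hypothetical software view of secure register group 2740: one bit per
 * page for ACT, TYPE, WR, ADV, and a 2-bit field per page for STAT
 * (sixteen pages fit one 32-bit STAT register). */
typedef struct {
    volatile uint32_t ACT;   /* ACT[N]:  1 = usage monitoring active     */
    volatile uint32_t TYPE;  /* TYPE[N]: 1 = Code page, 0 = Data page    */
    volatile uint32_t WR;    /* WR[N]:   1 = Dirty (written), 0 = Clean  */
    volatile uint32_t STAT;  /* STAT[2N+1:2N]: Usage Level per TABLE 3   */
    volatile uint32_t ADV;   /* ADV[N]:  1 = recommended for wiping      */
} sdp_regs_t;

static inline unsigned stat_of(const sdp_regs_t *r, unsigned n)
{
    return (r->STAT >> (2u * n)) & 0x3u;  /* 00 VERY LOW .. 11 HIGH */
}

static inline int is_dirty_bit(const sdp_regs_t *r, unsigned n)
{
    return (int)((r->WR >> n) & 0x1u);
}
```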
Kernel and/or SDP software may define data stack pages and data heap pages, and such available information is used by some embodiments according to the teachings herein.

Suppose the Code or Data page type information is not directly available, because the architecture does not have separate Instruction and Data Caches and the application does not identify the pages. Then the Write access activity is suitably monitored for each page in order to determine, in a proactive or preemptive way, which page is most likely not a Code page in the pool of Data pages. If a page is written, then it is probably a Data page and not a Code page. The default configuration is that a page is a Data page, so that both read and write access tabulations are encompassed.

When Code pages can be unambiguously identified, then differentiating Code from Data pages also confers control. When Code pages are identified, security circuitry is suitably provided to automatically prevent Code pages from being modified or hacked on the fly while each Code page is in Secure RAM. In cases where a Code page is obtained from the Data Cache, the page is tabulated as a Data page unless the application explicitly identifies it as a Code page. Since Code pages take less time to wipe in systems wherein Code pages are read-only (Clean by definition), the Code pages are assigned somewhat higher priority to wipe in STEP FOUR than pages of similar Usage Level that are modified, for example.

TABLE 1: PAGE TYPE BITS
0 = Data Page
1 = Code Page

Second Process and structure area: In FIGS. 9-11, a register WR codes a field WR[N] where one (1) signifies at least one write to page N and zero (0) signifies that the page has not been written. This register field WR[N] implements the time-consumption consideration that a page that has not been written takes less time to wipe, since the page need not be, and suitably is not, written back to external memory. The register field WR[N] is suitably reset by having zero (0) written into it by the Peripheral Port.

TABLE 2 describes the meaning of different values of register field WR[N].

TABLE 2: CODES SIGNIFYING WRITE OR NOT TO PAGE N
0 = No write to Page N, called a Read Page
1 = One or more actual writes have occurred to Page N, called a Write Page or Dirty Page

In some embodiments, the Second Process and structure area considers the characterization of a page as a Read Page to be a matter of initial assumption that needs to be checked by the circuitry. In this approach, when a page is detected to be potentially a Read Page, a drain of the Write Buffer and a Clean Data Cache Range (applied only to the respective 4K page being processed) is used to determine whether the Read Page assumption was correct. If the Read Page assumption is confirmed, then when the page is selected for wiping, the page is wiped out simply by the ADV register bit entry and/or subsequent overwriting. There is no need to execute a FIG. 6 Swap Out in the meantime by write-back to the external memory. If the Read Page assumption is disconfirmed, then the page is written back to the external memory as described in connection with FIG. 6.

In FIG. 9, the SSM Dirty Bits Checker 2710 monitors each 4K page N in the Secure RAM and detects any write access to each respective page N. The status of each page N is flagged in the WR[N] bit of register WR.
The status of each page N is cleared by the secure demand pager SDP circuitry by writing zeroes into register WR, either from circuit 2710 or from processor 1030 over bus 2745.

Some embodiments have write-back cache and other embodiments have write-through cache. Write-through cache may work somewhat more efficiently, since an L2 (Level 2) cache can retain substantial amounts of data before a random cache line eviction happens in write-back mode.

Next, various signal designators are used in connection with the SDP coupling to busses. The signal designators are composites built up from abbreviations and interpreted according to the following Glossary Table (each abbreviation is also subject to explanation in the text).

A: Address
ADDR: Address
CLK: Clock
EN: Enable
I: Instruction (bus)
N: Page Number
PROT: Protected, Secure
R: Read
READY: Ready
RW: Read/Write
SEC: Secure
VALID: Valid
W: Write
WR: Write

In FIG. 9, the processor buses INST bus, DATA bus, and bus 2745 have READ_CHANNEL (data and instruction fetch load) and WRITE_CHANNEL (data write) signals. These signals are useful to SDP 1040, such as those listed below.

ACLK: Main Clock
ACLKEN: Used to divide the Main Clock to create the bus clock (generally the core runs at 400 MHz and the bus clock at 200 MHz)
ARVALID: When High, the address on the bus is valid
ARPROT: Indicates whether this transaction is Secure/Public; User/Supervisor; Data/Opcode
ARADDR: Address requested
AWVALID: When High, the address and data on the bus are valid
AWPROT: Indicates whether this transaction is Secure/Public; User/Supervisor; Data/Opcode
AWADDR: Address requested

Some processor architectures, such as a true Harvard architecture, have separate busses for Data Read, Data Write, and Instructions (Opcode Fetch). READY signals AWREADYRW, ARREADYRW, and ARREADYI pertain to data-valid signals on different buses. ARREADYI is HIGH when a slave has answered or handshaked the read data, indicating data valid, on an Instruction bus to the RISC processor. ARREADYRW is HIGH when a slave has answered or handshaked the read data, indicating data valid, on a Data Read bus to the RISC processor. AWREADYRW is HIGH when the write data on a Data Write bus is valid, indicating data valid, to a slave. In various architectures, one bus may carry one, some or all of these types of data, and the appropriate ready signal(s) is provided.

The pages are aligned on a 4K-byte boundary. One embodiment example repeatedly operates on all of a set of pages N from 0 (0000 binary) to 15 (1111 binary) and concatenates the four bits representing a page number N (0 to 15), so that all sixteen page addresses PAGE_0_BASE_ADDR, PAGE_1_BASE_ADDR, ... PAGE_15_BASE_ADDR are loaded with a respective base address value START_SECRAM[31:16] concatenated with the four binary bits of index N as the offset from that base address, identifying each respective page N = 0, 1, ... 15. Each of these base addresses is respectively designated PAGE_N_BASE_ADDR.

Next, the process generates the truth value of the following expression:

(AWVALIDRW=1 and AWREADYRW=1 and ACLKENIRW=1 and AWPROTRW[2]=0).

If the expression is not true, then for N=0 to 15, a temporary register for holding information pertaining to each attempted page access has a respective bit zeroed, PAGE_N_WR=0, for every page N. If the expression is true, then for the accessed page number N, both the temporary register bit is set, PAGE_N_WR=1, and the Dirty/Clean register bit for the accessed page is set, WR[N]=1, provided PAGE_N_BASE_ADDR = AWADDRRW[31:12]. The temporary register bits PAGE_N_WR for all the other fifteen pages are zeroed.

In words, the SDP hardware 1040 monitors for the instance when not only the high 16 bits of PAGE_N_BASE_ADDR are equal to the high 16 bits of AWADDRRW[31:16], but also the next 4 page-specific bits of PAGE_N_BASE_ADDR are equal to the next 4 page-specific bits AWADDRRW[15:12] signifying the page to which PAGE_N_BASE_ADDR pertains. On Swap In, the high 16 bits are written to PA2VA of FIG. 4 and indexed by the next 4 page-specific bits. A match indicates a Write to Page N, which makes Page N Dirty. When a match happens, the Dirty/Clean register bit WR[N] pertaining to page N is set to one (1) (Dirty). The Dirty/Clean register WR is not modified at any other bit position at this time, since Dirty indications for any other page should be remembered and not disturbed.
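The write-access match just described amounts to comparing address bits [31:12] against each page base and setting one Dirty bit. A behavioral C sketch follows, with the bus handshake sampled as plain function arguments and all hardware timing abstracted away; it is an illustration, not the circuit.

```c
#include <stdint.h>

#define NUM_PAGES 16

static uint32_t page_base_addr[NUM_PAGES]; /* PAGE_N_BASE_ADDR, bits [31:12] */
static uint32_t wr_reg;                    /* WR[N] Dirty/Clean bits         */

/* One sampling of the Write Access Finder: on a valid write handshake
 * with AWPROT[2]=0, mark the matching page Dirty. Other WR bits are
 * left undisturbed, as described above. */
static void write_access_finder(uint32_t awaddr, int awvalid, int awready,
                                int aclken, uint32_t awprot)
{
    if (!(awvalid && awready && aclken && ((awprot >> 2) & 1u) == 0))
        return;                            /* expression not true: no access */

    for (unsigned n = 0; n < NUM_PAGES; n++) {
        if (page_base_addr[n] == (awaddr >> 12)) {
            wr_reg |= 1u << n;             /* WR[N] = 1: Page N is Dirty */
            break;
        }
    }
}
```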
Third Process and structure area: Statistics on the frequency of use of each page N are kept as 2-bit values designated STAT in registers 2740 of FIG. 9, and as depicted in FIGS. 10 and 11. These statistics are examples of prognostic registers or register values. STAT identifies which pages are used more frequently than others, so that paging will be less likely to wipe out a more frequently used page. For example, when the coding represents a VERY LOW usage condition for a page, that page is a good candidate for wiping.

To conserve real estate, modulo-2 counters are used in an embodiment, as follows. In FIG. 9, a Usage Level Encoder in Usage Level Builder 2720 encodes Page Counter values according to TABLE 3, so that each of the values is tiered and loaded into a two-bit field called STAT.

TABLE 3 shows how the page counter values are compressed into STAT 2-bit values. In this example, and without limitation, sixteen 4K pages N have their two-bit statistics recorded in sixteen two-bit entries in a 32-bit STAT register STAT[31:0]. Each two-bit entry is designated STAT[2N+1:2N] for page number N running from 0 to 15.

TABLE 3: STATISTICS CONVERSION
Counter value 0: STAT 00, VERY LOW
Counter value 1 to 47: STAT 01, LOW
Counter value 48 to 95: STAT 10, MEDIUM
Counter value 96 to 127: STAT 11, HIGH

Note that variation of boundaries like 48 and 96 is readily made in various embodiments. Ranges of variation for the low boundary (0, 1, 48, 96 in TABLE 3) of each Usage Level are essentially as low or as high as the counter range, depending on the selection by the skilled worker. The high boundary (0, 47, 95) is set one less than the low boundary of the next higher Usage Level, so that all possible counter values are assigned to some Usage Level.

The number of Usage Levels, when used, is suitably at least two, without an upper limitation on the number of Usage Levels, and ordinarily fewer than nine for inexpensive circuitry. In this example, four Usage Levels were adopted.

The highest counter value (e.g., 127 here) suitably has no upper limit, but most applications will empirically have some counter value below which a predetermined percentage (e.g., 90% for an upper counter value one less than a power of two) of processor runs lie. The highest counter value can be increased or decreased according to the number of pages available for SDP in the internal Secure RAM. Then the counter is suitably established to have that counting capacity. If the counter does reach its hardware upper limit, the counter suitably is made to saturate (remain at the upper limit) rather than rolling over to zero, to avoid confusing Usage Levels with each other.
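TABLE 3 is directly expressible as a small encoder. The following sketch assumes the example boundaries 1, 48, and 96; as noted above, other embodiments vary them.

```c
#include <stdint.h>

/* TABLE 3 conversion: compress a 7-bit Page Access Counter (0-127)
 * into the 2-bit STAT code. */
static unsigned stat_encode(uint8_t counter)
{
    if (counter == 0)   return 0u;   /* 00: VERY LOW */
    if (counter <= 47)  return 1u;   /* 01: LOW      */
    if (counter <= 95)  return 2u;   /* 10: MEDIUM   */
    return 3u;                       /* 11: HIGH     */
}
```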
For most implementations, a counter capacity between 31 and 1023 appears practical and/or sufficient.

The Usage Levels in this example divide the counter range at the lower count boundaries (e.g., 0, 1, 48, 96) so that some ranges are approximately equal (here, two ranges are 48 counts wide). Other embodiments set the ranges differently. One way is setting some of the upper or lower range boundaries logarithmically, such as approximately 1, 4, 16, 64. Another approach uses 0, 32, 64, 96, or some of those values, and directly loads the two MSB bits of the counter as a Usage Level to register STAT.

Another embodiment determines the ranges, to improve execution of known software applications, by empirical testing beforehand, and then configures the Usage Level Encoder with the empirically determined ranges prior to use. Still another embodiment in effect does the empirical testing in the field, dynamically learning as the applications actually execute in use in the field, and then adjusts the boundaries to cause the wiping Advice to keep the execution time and power dissipation very low.

The counting operations help avoid prematurely swapping out a newly swapped-in page. A Swap is executed when a page fault occurs, which means an attempted access is made to a missing page. SDP software uses Page Wiping Advisor 2730, in the hardware of FIG. 9 added to Secure State Machine SSM, to identify a page slot when the space in Secure RAM for physical pages is full, and in some embodiments under other conditions as well. If the old page in the identified page slot has been modified, SDP securely swaps out the old page as shown in FIGS. 4, 6, 7 and 8. Then SDP software swaps in a new page in secure mode as shown in FIGS. 4, 5, 7 and 8, thereby replacing the old page in the page slot.

Some embodiments have control software to activate, access, and respond to the SDP hardware of FIG. 9. In some embodiments, that control software is suitably provided as a space-efficient software component in Secure ROM that is patch-updatable via a signed Primary Protected Application. See coassigned, co-filed U.S. non-provisional patent application TI-38213, "Methods, Apparatus, and Systems for Secure Demand Paging and Other Paging Operations for Processor Devices".

Register group 2740 has the WR register, used in conjunction with Page Access Counters 2845 of FIGS. 10 and 11, to determine when page slots with a page currently mapped are Dirty (modified in Secure RAM after Swap In). Often the Swap In occurs, but with Swap Out of the old page from the slot being omitted. Swap Out is omitted, for instance, for old code pages when self-modifying code is not used. The virtual slots (where potential physical pages can be mapped) might change one time for code, and that is when the code is loaded.

For a Clean page, the Dirty/Clean WR register information provides a preventive signal to bypass Swap Out and thereby save and avoid the cost of a Swap Out to wipe or steal that page. Then a Swap In is performed into the page slot occupied by the no-longer-needed Clean page. In other words, Swap In from DRAM of a new page to be accessed writes over an old Clean page residing in a page slot identified by Page Wiping Advisor 2730.
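The Clean-page bypass reduces to one test before replacement. A minimal sketch, with swap_out, swap_in, and page_is_dirty as simplified hypothetical wrappers for the FIG. 6 path, the FIG. 5 path, and the WR register:

```c
/* Hypothetical wrappers over the mechanisms described above. */
extern void swap_out(unsigned page_slot);   /* FIG. 6: hash + encrypt + DMA */
extern void swap_in(unsigned page_slot);    /* FIG. 5: DMA + decrypt + check */
extern int  page_is_dirty(unsigned page_slot); /* WR[N] from register group */

/* Replace the slot chosen by the Page Wiping Advisor: Swap Out only when
 * the old page is Dirty; a Clean page is simply overwritten by the
 * incoming Swap In, bypassing the encryption/hash overhead. */
static void replace_page(unsigned victim_slot)
{
    if (page_is_dirty(victim_slot))
        swap_out(victim_slot);   /* write-back needed for modified page */
    swap_in(victim_slot);        /* new page overwrites the old one     */
}
```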
Time and processing power are efficiently used in this process embodiment by Swapping Out a page selected for wiping specifically when that page is Dirty, prior to the subsequent Swap In of a new page.

The data for a secure environment is found to be much smaller in space occupancy than the code for many secure applications. Some secure applications like DRM (Digital Rights Management) do use substantial amounts not only of secure code but also of secure data, such as encrypted data received into non-secure DRAM. The secure environment decrypts the encrypted data and puts the resulting decrypted data back into DRAM, while keeping the much smaller amount of data represented by the key(s) and/or base certificate structure for the DRM in the secure environment. Selective control to Swap Out a page selected for wiping specifically when that page is Dirty, and to apply a bypass control around Swap Out when the wiped page was Clean, still saves time and processing power.

Then the page fault error status is released and an access on the previously missing page occurs and is completed, since that page is now swapped in. The SDP hardware of FIG. 9, when receiving an access, updates its internal counter corresponding to this page with the highest value (e.g., 127), which ranks the page HIGH; consequently, this page is given the last or lowest priority to be wiped out at this point.

The Third Process and structure area builds statistical information on the usage of each page in order to help the SDP Page Wiping Advisor 2730 choose the right page to wipe out. The statistical information helps avoid wiping out pages that are currently in use, or being used more than the appropriate page to wipe out, all other things being equal.

The Usage Level Builder 2720 builds a Usage Level for each page by detecting any read or write access occurring on each page. Some embodiments do not differentiate a burst access from a single access for this detection operation. The SSM sends the access detection to Usage Level Builder 2720. Usage Level Builder 2720 outputs, for example, two (2) bits of statistical information, encoded based on TABLE 3, to statistics register STAT. Statistics register STAT is accessible to the SDP (secure demand pager) software executing on RISC processor 1030 or another processor.

Note that the statistical information provided by the STAT register may not always be precisely accurate, due to the high number of cache hits that might occur in an L2 cache in some cache architectures. However, the information is indicative and sufficiently accurate for the SDP.

In FIG. 9, the SSM Usage Level Builder 2720 has a set of Page Access Counters 2845 of FIGS. 10 and 11. Those counters include, for example, a seven (7) bit counter (0-127) for each 4K-byte page N. When page N is accessed, the respective Nth page counter is set to 127, and all other page counters are decremented by one. In operation, the counters of the currently accessed pages have higher count values, closer or nearer to 127, and no-longer-used pages have counter values close to zero.

In other words, the Page Access Counters 2845 in effect keep a reverse count of non-uses of each page by decrementing down, so that a more-unused page has a lower counter value, in one example, than a less-unused page. In this way both recency of access and frequency of use work together, for a page that should not be wiped, to keep the counter value high.
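The counter policy (set the accessed page's counter to 127, decrement the rest) is captured by the following sketch, assuming the sixteen-page, 7-bit-counter example above:

```c
#include <stdint.h>

#define NUM_PAGES 16
#define COUNT_MAX 127u   /* 7-bit Page Access Counter range */

static uint8_t page_counter[NUM_PAGES];

/* On an access to one page: set its counter to the top of the range and
 * decrement all others (stopping at zero), so currently used pages stay
 * near 127 while no-longer-used pages drift toward zero. */
static void note_access(unsigned accessed)
{
    for (unsigned n = 0; n < NUM_PAGES; n++) {
        if (n == accessed)
            page_counter[n] = COUNT_MAX;
        else if (page_counter[n] > 0)
            page_counter[n]--;
    }
}
```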
The counters are reset by resetting the particular Page Access Counter in counters 2845 that pertains to the particular page slot that is wiped at any given time. That particular Page Access Counter is reset to 127 (all ones), for example, and the corresponding STAT register bit pair is reset to 11 (HIGH) for the particular physical page slot that is wiped. The counts for other pages need not be reset, since those counts are still meaningful. For example, another little-used page whose count has been repeatedly decremented down to a low value, and which has not yet been wiped, need not have its Page Access Counter value reset to 127 when some other little-used page has just been wiped.

Other embodiments suitably use the opposite end of the range and/or other policies to efficiently differentiate pages to wipe from pages not to wipe.

An alternative embodiment operates oppositely to that of TABLE 3, and sets the counter for page N to zero each time page N is accessed. The counters for all the other pages are incremented by one. The values of TABLE 3 are reversed in their meaning, and the results of operation are similar. Operations thus establish count values for recently accessed or more frequently used pages nearer to one end of the count range than the count values for hardly-used pages.

Another alternative embodiment initializes the counters to zero and increments the counter pertaining to a page when that page is accessed, to keep a statistic. Counters for pages that were not accessed have their values unchanged in this alternative. Elapsed time from a time-stamp pertaining to the page is combined with the counter information to indicate frequency of use. Numbers representing tiers of elapsed time and tiers of counter values are suitably combined by logic to indicate frequency of use. A page that has recently been swapped into Secure RAM, and therefore has few accesses, is thus not automatically given a high priority for wiping just because its usage happens to be low. In other words, recency of entry of a new page is taken into account.

In some embodiments, when Secure RAM has empty page slots, the empty page slots are used as Swap In destinations before wiping and/or Swapping Out any currently resident pages.

The Page Access Counters 2845 are utilized for tracking code and data pages separately in some embodiments, to bump (i.e., increment or decrement) a respective counter for each page and page type. Some embodiments keep statistics with several counters, e.g., three (3) counters designated NEW, OLD, and OLDER. On each page fault, the three aged counters are updated on a per-virtual-address-slot basis. Counter NEW holds the latest count. Counter OLD gets an old copy of counter NEW. Counter OLDER gets an older copy of counter NEW. In another embodiment, a weighting system is applied to the three aged counters for dynamically adjusting operations of the page wiping advisor. Some embodiments provide such counters representing separate ranges of the age of a page since last access.

Some embodiments provide additional bits to signify saturation and/or roll-over of the counter, and an interrupt is suitably supplied to RISC processor 1030 to signal such a condition.
The SDP hardware 1040 generates an interrupt each time a page access counter reaches zero (0), rather than waiting for the application program running under SDP to generate a page fault that must then be serviced. A process determines which new page to import, such as by loading a page that is adjacent in virtual address space to a currently-loaded high-usage page, by pre-decoding from an instruction queue, or by another appropriate new-page identification mechanism.

The interrupt architecture for SDP hardware 1040 thereby obviates continual statistics-management monitoring or polling by RISC processor 1030 (also called tight coupling). A still further variant of the aged-counters approach utilizes a secure timer interrupt, in secure mode when SDP is in use, to vary the frequency of reading the three aged counters. Thus, a variety of interrupt-based SDP embodiments of hardware 1040 are contemplated, as well as polling embodiments.

A statistics register for each virtual page slot can be provided in some embodiments, because the virtual slots (in secure virtual memory) are the relevant address space to the operations of application code. In other embodiments, Page Access Counters 2845 are kept low in number by having them correspond to the Secure RAM physical pages, which map to any virtual slot in general. Also, a single counter circuit coupled to several count value registers helps keep real estate small.

Even though the virtual slots are the relevant address space to the application, the physical page hardware statistics are nevertheless efficiently maintained in some embodiments on a physical page-slot by page-slot basis. This approach represents and handles a lot of data when the virtual address space is very large, without need of a statistics register for each virtual page slot. In this physical page statistics approach, physical pages are accessed if mapped into some page slot of the virtual address space. The status registers need only track the much smaller number of physical pages.

The Page Access Counters 2845 pertain to each physical page in FIG. 10. The SSM monitors the physical bus, and the SSM need not be coupled to the MMU mapping. Page Access Counters 2845 rank the usage of the physical page slots in Secure RAM in order to determine the type (Clean, read; or Dirty, write) and Usage Level of each page, by SSM tracking of each access going from the MPU to Secure RAM.

In FIG. 4, the SDP Manager uses virtual memory contexts VMC to context-associate the physical pages to the much larger number of virtual slots and maintain their relevance and association on a per-page-slot basis in the virtual address space. Put another way, the physical page statistics data is instanced to a context (VMC) associated with each virtual slot where a physical page, at some point in time, is or has been mapped; the statistics track physically what occurred to that physical page, but only while it was mapped into that slot. When the physical page is wiped and/or Swapped Out, such as to free physical space for a new page, the physical statistics counters are cleared, as described hereinabove, because they are no longer relevant to where the page is newly mapped into the virtual address space. New counts are then added to statistics maintained on a virtual slot basis. Suppose the page wiping advisor circuitry makes decisions based upon what the application does in the virtual address space, and not the physical address space.
The physical address space is irrelevant to the operations of the application over a longer period of time in which the physical pages have been dynamically reassigned to many virtual address slots.

The application program's micro-operational reads/writes to memory are thus tracked by physical page of Secure RAM. Suppose the scavenging decision to wipe a page is based upon virtual page slots (4K each) comprising the entire virtual memory space. Then, in the aged-counters approach, each page slot (4K) is associated with three different aged counters. Each DRAM backing page, in effect, is supported by these counters, because a linear one-to-one mapping (an array) relates DRAM backing pages to slots in the virtual address space and thus reduces complexity. Those counters are suitably kept and maintained encrypted in the non-secure DRAM, as part of the other DRAM backing page statistics.

Another category of embodiments considers, in a greater SDP context, the history of what an application program accesses in the larger virtual address space. For these embodiments, the history in virtual address space is regarded as more important than what the application does in the physical address space (pages mapped into slots of the virtual space) when relating to scavenging operations. A page that has not been dirtied (modified) since the last Swap Out of data in the virtual slot where that page is currently mapped is a more efficient candidate for stealing than a dirty page. Statistics maintained on accesses to a given physical page are important when related to the context of where the physical page is mapped into the virtual address space. That is because the application's actions and accesses with its underlying memory are in the virtual address space, not the physical space, which is being used dynamically in a time-sliced manner to create the larger virtual address space. Therefore, if hardware like that of FIG. 9 monitors accesses to physical pages to produce its statistics, a further variation keeps and relates the statistics on physical pages in the larger context of information relating to the virtual address space. Accordingly, before the page is wiped or stolen and moved to a different supporting virtual address slot, the information is retrieved from hardware statistics register STAT and saved into a software-maintained statistics data structure that is indexed based on the related and corresponding virtual address slot that produced the statistics data.

Returning to the physical page approach of FIG. 9: in order to ensure that the counters are not corrupted by the SDP software, which may be resident in Secure RAM, the secure registers 2740 include a page activity register ACT. This page activity register ACT allows disabling of usage level monitoring for any page of the page pool, as described next.

TABLE 4: CODES SIGNIFYING ACTIVATION OF USAGE LEVEL MONITORING
0 = Usage Level Monitoring not activated
1 = Usage Level Monitoring activated

When ACT[N] is High (1), page N is taken into account in updating Usage Level register STAT and Page Wiping Advisor register ADV.
In FIG. 9, the SSM Usage Level Builder 2720 handles the 7-bit page counters as follows.

The process generates the truth value for the following expression, pertaining to a Read bus, analogous to the expression hereinabove:

(ARVALIDRW=1 and ARREADYRW=1 and ACLKENIRW=1 and ARPROTRW[2]=0).

If the expression is not true, then for N=0 to 15, each respective read bit is zeroed in another temporary register, PAGE_N_RD=0. If the expression is true, then the temporary register bits are respectively zeroed, except for setting PAGE_N_RD=1 for the page N that was read. Which page N was read is determined by finding the N that produces a match PAGE_N_BASE_ADDR = ARADDRRW[31:12]. On Swap In, the high 16 bits are written to PA2VA of FIG. 4 and indexed by the next 4 page-specific bits. This indication PAGE_N_RD=1 is useful for adjusting the corresponding Page Access Counter in counters 2845, by indicating that page N has this additional instance of being used.

Notice that the process is analogous to the process hereinabove, except that a Read bus is involved instead of a Write bus. "AR" instead of "AW" is the letter pair prefixed to the additional variables involving Valid, Ready, and Protected.

Next, for each page N from 0 to the top page, the process generates the truth value for the following expression, for an Instruction bus, analogous to the expression hereinabove:

(ARVALIDI=1 and ARREADYI=1 and ACLKENIRW=1 and ARPROTI[2]=0).

If the expression is not true, then for N=0 to 15, each respective page bit PAGE_N_I is zeroed in another temporary register (PAGE_N_I=0). If the expression is true, then for the accessed page number N, both the temporary register bit is set, PAGE_N_I=1, and the TYPE register bit for the accessed page is set, TYPE[N]=1, provided PAGE_N_BASE_ADDR = ARADDRI[31:12]. On Swap In, the high 16 bits are written to PA2VA of FIG. 4 and indexed by the next 4 page-specific bits. The temporary register bits PAGE_N_I for all the other fifteen pages are zeroed.

In words, the SDP hardware 1040 monitors for the instance when not only the high 16 bits of PAGE_N_BASE_ADDR are equal to the high 16 bits of ARADDRI[31:16] on the Instruction bus, but also the next 4 page-specific bits of PAGE_N_BASE_ADDR are equal to the next 4 page-specific bits ARADDRI[15:12] signifying the page to which PAGE_N_BASE_ADDR pertains. A match indicates an instruction fetch from Page N, which makes Page N a Code page. When a match happens, the TYPE register bit TYPE[N] pertaining to page N is set to one (1) (Code page). The TYPE register is not modified at any other bit position at this time, since current Type indications for any other page should be remembered and not disturbed.

The process just above is analogous to the process hereinabove, except that PAGE_N_I and the TYPE register are involved instead of WR[N], and "I" for Instruction bus instead of "RW" is the suffix in the further additional variables involving Valid, Ready, and Protected.

The activated pages are prevented from being corrupted by accesses to not-activated pages by virtue of a respective access-valid register bit PAGE_N_ACCESS_VALID for each page N.
The activated pages are prevented from being corrupted by accesses to not-activated pages by virtue of a respective access-valid register bit PAGE_N_ACCESS_VALID for each page N. Specifically, for each respective page N, that access-valid register bit is generated as follows:

PAGE_N_ACCESS_VALID = (PAGE_N_WR OR PAGE_N_RD OR PAGE_N_I) AND ACT[N]

Note that the letter "N" represents a page index in each of the five register bits of the above equation.

Next, if PAGE_N_ACCESS_VALID is valid for any page (determined by OR-ing the access-valid register bits for all pages), then the page counter for the page N for which access is valid is set to 127, and all other page counters are decremented by one, or maintained at zero if already zero. In this way, even a single isolated instance of a page access is sufficient to confer a HIGH Usage Level to the page for a while.

Some other embodiments instead add a predetermined value to the page counter for page N. For example, the predetermined value can be equal to half the counter range or some other value. The page counter is structured to saturate at 127 and not roll over if the result of the addition exceeds the high end value (e.g., 127). In this way, more than a single isolated instance of a page access is required to regain the HIGH Usage Level.

The SSM Usage Level Builder 2720 of FIG. 9 encodes the page counters according to TABLE 3. The page counters for all pages are updated as described each time an access occurs on one of the activated pages. Also, the statistics value STAT for a respective page is updated each time an access occurs on one of the activated pages.

Fourth process and structure area. The fourth process and structure area computes from the first, second, and third process and structure area results to determine which page N to wipe out. The result of the fourth process and structure area is accessible by the secure demand pager SDP via the secure register ADV according to TABLE 5.

TABLE 5: CODES SIGNIFYING RECOMMENDATION TO WIPE A PAGE
0   Page N must not be wiped out
1   Page N should be wiped out

When only one single bit ADV[N] is High, then page N has been identified as the best choice for wiping. Note that the ADV register can have more than one bit high at a time, as described next hereinbelow. Also, when no ADV bits are high, such as when every page has HIGH Usage Level and low priority for wiping, the SDP randomly or otherwise appropriately selects a page for wiping.

When the various registers ACT, TYPE, WR, STAT, and ADV are reset, such as on power up, warm reset, or new Virtual Machine Context (VMC), all the bits in those five registers are suitably reset to zero. Those five registers are suitably provided as secure registers protected by SSM, and a Secure Supervisor program is used to access those registers ACT, TYPE, WR, STAT and ADV in Secure Mode.

In FIG. 10, the coding concatenates STAT[2N+1:2N] & TYPE[N] & WR[N] to create a Concatenation Case Table 2850 having row entries for each page N. A design code case statement involving this concatenation is also represented by the expression

CASE STAT[2N+1:2N] & TYPE[N] & WR[N]

A row entry in this example has four bits entered therein. In other embodiments, more or fewer bits with various selected meanings are suitably used in an analogous way. For example, if a page has a row 4 entry "1001" in Concatenation Case Table 2850, it means that Page 4 is characterized by {"10" - MEDIUM usage, "0" - Data Page, "1" - Dirty Page}. For a second example, if a page has a row 5 entry "0110" in Concatenation Case Table 2850, it means that Page 5 is characterized by {"01" - LOW usage, "1" - Code Page, "0" - Clean Page}.
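The counter behavior described above, reset of the accessed page's counter to 127 with decay of all others, or the saturating-boost variant, can be sketched in C as follows. The function name update_counters and the array layout are assumptions for illustration, not the SSM hardware itself.

    #include <stdint.h>

    #define NUM_PAGES 16
    #define CTR_MAX   127   /* 7-bit page counter saturates at 127 */

    /* On an access to page 'hit': either reset that counter to the top of
     * range, or (alternative embodiment) add a predetermined boost with
     * saturation; every other counter decays by one, floored at zero. */
    void update_counters(uint8_t ctr[NUM_PAGES], int hit, int use_boost, uint8_t boost)
    {
        for (int n = 0; n < NUM_PAGES; n++) {
            if (n == hit) {
                if (use_boost)
                    ctr[n] = (ctr[n] > CTR_MAX - boost) ? CTR_MAX : (uint8_t)(ctr[n] + boost);
                else
                    ctr[n] = CTR_MAX;
            } else if (ctr[n] > 0) {
                ctr[n]--;   /* maintained at zero if already zero */
            }
        }
    }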
The Concatenation Case Table 2850 is suitably regarded as a collective designation for the STAT, TYPE, and WR registers in some embodiments; it can be a separate physical table in other embodiments.

The Page Access Counter(s) 2845 supplies an identification of an applicable one of a plurality of usage ranges called Usage Levels (VERY LOW, LOW, MEDIUM, HIGH) in which the usage from the Page Access Counter lies, to form the Usage Level bits according to TABLE 3 for the STAT register in Concatenation Case Table 2850.

In FIG. 10, the Concatenation Case Table 2850 is then converted to a 9-bit field, by table lookup from TABLE 6 or by conversion logic implementing TABLE 6 directly, to supply each entry to a Priority Sorting Page 2860.

TABLE 6: ENCODINGS FOR PRIORITY 9-BIT FIELD
Case (STAT & TYPE & WR)   9-Bit Field   Meaning
0010                      000000001     CODE page, VERY LOW usage
0110                      000000010     CODE page, LOW usage
0000                      000000100     DATA READ page, VERY LOW usage
0100                      000001000     DATA READ page, LOW usage
0001                      000010000     DATA WRITE page, VERY LOW usage
0101                      000100000     DATA WRITE page, LOW usage
1010                      001000000     CODE page, MEDIUM usage
1000                      010000000     DATA READ page, MEDIUM usage
1001                      100000000     DATA WRITE page, MEDIUM usage
11xx                      000000000     Page should not be wiped out
xx11                      N/A           Not used, where a code page is read-only

For example, a Code page with VERY LOW usage has the highest priority for wiping, and all other items have decreasing priority in order of their listing. A Data Write page is a Dirty page, which is lower in priority than other pages, other things equal. This is because the Dirty page, if selected for wiping, is Swapped Out, and SDP Swap Out involves overhead of Encryption and Hash for security, which can sometimes be avoided by setting the priority lower.

The TABLE 6 conversion assigns a higher page priority for wiping to a page that is unmodified (Clean) than to a page that has been written (Dirty), other things equal. A higher page priority for wiping is assigned to a code page than a data page, other things equal. A higher page priority for wiping is assigned to a lower usage page than a higher Usage Level page (see STAT register TABLE 3), other things equal.

A 9-bit page priority for wiping is assigned in operations according to TABLE 6. The TABLE 6 priorities represent that, for at least one Usage Level of TABLE 3, a code page in that Usage Level has a higher priority than an unmodified data page in the next lower Usage Level. For example, a CODE page with LOW usage has a higher priority than a DATA READ page with VERY LOW usage. In the example of this paragraph, this prioritization is established mainly because there is an uncertainty on a DATA READ page, which can become a DATA WRITE page after a writeback drain and cache flush. By contrast, pages identified as CODE are sure not to become DATA WRITE pages in this example, wherein an assumption of no modification to code pages is established.

Similarly, a page priority for wiping is assigned wherein, for at least one Usage Level, an unmodified data page in that Usage Level has a higher priority than a written data page in the next lower Usage Level. For example, a DATA READ page with LOW usage has a higher priority than a DATA WRITE page with VERY LOW usage.
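The TABLE 6 lookup can be expressed compactly in C. The sketch below is an illustrative model of the conversion logic (priority_field is an assumed name); it maps the 4-bit concatenation STAT[1:0] & TYPE & WR to the 9-bit singleton-one field, with bit 0 the highest wiping priority and all-zeroes for HIGH-usage pages and the unused dirty-code case.

    #include <stdint.h>

    uint16_t priority_field(uint8_t stat /* 0..3 */, uint8_t type, uint8_t wr)
    {
        if (stat == 3)            return 0;  /* HIGH usage: must not be wiped   */
        if (type == 1 && wr == 1) return 0;  /* xx11 not used: code stays clean */

        /* Singleton-one bit positions from TABLE 6, indexed [usage][kind]. */
        static const uint8_t bitpos[3][3] = {
            /*             CODE  DATA READ  DATA WRITE */
            /* VERY LOW */ { 0,      2,         4 },
            /* LOW      */ { 1,      3,         5 },
            /* MEDIUM   */ { 6,      7,         8 },
        };
        int kind = type ? 0 : (wr ? 2 : 1);
        return (uint16_t)1 << bitpos[stat][kind];
    }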
Different embodiments suitably provide other types of priority codings or representations, such as binary, binary-coded-decimal, etc., than the 9-bit singleton-one position-coded priority (or its complement) shown in the example of TABLE 6, even if the listed hierarchy of tabulated MEANINGS remains the same. In still other contemplated embodiments, the hierarchy of MEANING is revised to establish other practically useful priority orderings and more or fewer priority codes, depending on experience with different numbers of Usage Levels, Types, Dirty/Clean conditions, and fewer or additional such variables.

In FIGS. 9 and 10, the ADV register has a respective bit set high corresponding to each of the pages that have hit the highest priority (000000001) in the sorting scheme of TABLE 6. Thus, the ADV register may have more than one bit set high for pages that all have the same priority level (e.g., all at level zero or all at level four). The SDP mechanism consequently chooses the page that is the most suitable for its internal processing without any added distinction required. In such a case, SDP is suitably arranged to randomly select one page, for instance, or to perform a predetermined selection (e.g., choose the highest 4-bit physical page number N) implemented in an inexpensive circuit structure.

In FIG. 9, a Page Wiping Advisor 2730 is suitably provided as represented by hardware and operations on a Priority Sorting Page 2860 of FIG. 10 as follows. For each page N, enter a respective 9-bit value of PRIORITY_SORTING_PAGE_N[8:0] as follows:

If page activity ACT[N] is zero, then set PRIORITY_SORTING_PAGE_N[8:0] to zero for that page N.

If page activity ACT[N] is one, then set PRIORITY_SORTING_PAGE_N[8:0] to the nine-bit value from TABLE 6 representing the page Type TYPE[N], Dirty status WR[N], and its Usage Level STAT[N].

Next, the process considers each of the bit-columns of Priority Sorting Page 2860 in FIG. 10. For example, nine such bit-columns are indexed from column zero (0), high wiping priority, on the right to column 8, low wiping priority, on the left. In other words, column zero (0) represents "CODE page, VERY LOW usage," the highest priority for wiping in TABLE 6, if a one (1) entry is in column zero (0). Columns 1, 2, 3, ... 8 in Priority Sorting Page 2860 respectively have the successively lower priority Meanings tabulated vertically in TABLE 6 down to low priority 8, "DATA WRITE page, MEDIUM usage." A singleton one (1) is entered in page-specific rows of Priority Sorting Page 2860 to represent the priority assigned to each active physical page that has less than HIGH Usage Level in Secure RAM 1034 governed by SDP.

Priority Result 2870 is loaded with OR-bits in the following manner. The bits entered in a given column of Priority Sorting Page 2860 are fed to an OR-gate or OR-process. These bits in a given column of Priority Sorting Page 2860 correspond to the pages from 0 to total number N. Priority Sorting Page 2860 is an N-by-9 array or data structure in this example. The OR operation is performed on each of the nine columns of Priority Sorting Page 2860 to supply nine (9) bits to Priority Result 2870 of FIG. 10, as represented by design pseudocode here:

PRIORITY_RESULT[0] = PRIORITY_SORTING_PAGE_0[0] OR ... OR ... OR PRIORITY_SORTING_PAGE_N[0]
...
PRIORITY_RESULT[8] = PRIORITY_SORTING_PAGE_0[8] OR ... OR ... OR PRIORITY_SORTING_PAGE_N[8]
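A software rendering of this column-OR might look like the following; a single bitwise OR over the per-page 9-bit rows merges all nine columns at once (priority_result and NUM_PAGES are illustrative names).

    #include <stdint.h>

    #define NUM_PAGES 16

    /* OR every page's 9-bit priority row together: bit k of the result is
     * one when at least one page occupies priority column k. */
    uint16_t priority_result(const uint16_t sorting_page[NUM_PAGES])
    {
        uint16_t result = 0;
        for (int n = 0; n < NUM_PAGES; n++)
            result |= sorting_page[n];
        return (uint16_t)(result & 0x1FF);  /* nine priority columns */
    }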
Next, for each page N, the process successively looks right-to-left, by an IF..ELSE IF..ELSE IF successively-conditional sifting structure and procedure (in CAPS hereinbelow), for the highest priority value (right-most one) for which PRIORITY_RESULT[] 2870 is a one. This successively-conditional procedure moves column-wise in Priority Result 2870 from right to left. As soon as the first "one" (1) is found, which is the right-most one, the process pseudocode hereinbelow loads register ADV and falls through to completion. This right-most one in Priority Result 2870 identifies corresponding column 2875 of Priority Sorting Page 2860. Column 2875 is significant for determining which page to wipe. The process loads the entire column 2875 of Priority Sorting Page 2860 into the Page Wiping Advice register ADV 2880 to establish the wiping advice bit entries in register ADV.

The Page Wiping Advice register ADV is thus loaded, by design pseudocode hereinbelow, by concatenation of PRIORITY_SORTING_PAGE 2860 bits (pertaining to all the physical pages) in column 2875, based on that highest priority right-most one position detected in Priority Result 2870. If the successive procedure fails to find a PRIORITY_RESULT bit active (1) for any of the priorities from 0 to 8, then operations load register ADV with all zeroes, and also zero a default bit named OTHERS.

IF PRIORITY_RESULT[0] = 1 THEN ADV = PRIORITY_SORTING_PAGE_0[0] & ... & ... & PRIORITY_SORTING_PAGE_N[0]
ELSE IF PRIORITY_RESULT[1] = 1 THEN ADV = PRIORITY_SORTING_PAGE_0[1] & ... & ... & PRIORITY_SORTING_PAGE_N[1]
ELSE IF PRIORITY_RESULT[2] = 1 THEN ADV = PRIORITY_SORTING_PAGE_0[2] & ... & ... & PRIORITY_SORTING_PAGE_N[2]
...
ELSE IF PRIORITY_RESULT[8] = 1 THEN ADV = PRIORITY_SORTING_PAGE_0[8] & ... & ... & PRIORITY_SORTING_PAGE_N[8]
ELSE ADV <= 0; OTHERS <= 0;
ENDIF;

In TABLE 6 and FIG. 10, the nine-bit Priority field has nine bits for singleton one positions in this example because, conceptually, three times three is nine. The first conceptual "three" pertains to the number of types T of pages for TYPE[N] concatenated with WR[N]. In this example, the types of pages are 1) 10 - Code Page, 2) 00 - Data Read Page, and 3) 01 - Data Write Page. Data Read does not necessarily mean a read-only limitation, just a page that has not been written (Clean) while in Secure RAM. The second "three" pertains to the three levels of Statistics STAT other than HIGH. Those three lower STAT levels are 1 - Very Low, 2 - Low, and 3 - Medium.

In general, "L-1" (L minus one) is the number of Usage Levels represented in the Statistics register STAT, less one for the highest Usage Level. High usage pages get all-zeroes. Thus, the number of bits in each row of the Priority Sorting Page 2860 is the product (L-1)*((2^(T+W))-1), where T is the number of bits in the Type register TYPE, W is the number of bits in the Write or Dirty register WR, and "^" means "raised to the power." In this example, wherein L is 4, T is one, and W is one, the number of bits in each row of Priority Sorting Page 2860 is nine.

In FIG. 10, each row of the Priority Sorting Page 2860 has nine bits among which is a singleton one, except all-zeroes for HIGH Usage Level pages and inactivated pages (ACT[N]=0). The singleton one can occupy any one of the nine bit positions, depending on the result from the Concatenation Case Table 2850. Nine zeroes (all-zeroes) in a row of the Priority Sorting Page 2860 means that a page is present (valid) and should not be wiped because usage is HIGH, or that a particular page N is not present (not valid, ACT[N]=0) in Secure RAM page space.
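The right-most-one sift and the ADV load described above can be sketched together in C, with the column-OR repeated inline so the function is self-contained. The builtin __builtin_ctz (available in GCC and Clang) stands in for the right-most ones detector; wiping_advice is an assumed name.

    #include <stdint.h>

    #define NUM_PAGES 16

    /* Find the lowest-index (highest-priority) set column of the OR-ed
     * result, then gather that column across all pages into a per-page
     * advice bitmask: bit n set means page n may be wiped. */
    uint16_t wiping_advice(const uint16_t sorting_page[NUM_PAGES])
    {
        uint16_t result = 0;
        for (int n = 0; n < NUM_PAGES; n++)
            result |= sorting_page[n];          /* column OR, as above       */
        if (result == 0)
            return 0;                           /* ADV all zero; OTHERS case */

        int col = __builtin_ctz(result);        /* right-most one            */
        uint16_t adv = 0;
        for (int n = 0; n < NUM_PAGES; n++)
            if (sorting_page[n] & (1u << col))
                adv |= (uint16_t)(1u << n);     /* page n hit winning column */
        return adv;
    }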
STEP TWO processes the Priority Sorting Page 2860 by doing a Boolean-OR on the bits in each column of Priority Sorting Page 2860. All the bits from the first column are ORed, and the result is put in a corresponding first cell of a Priority Result vector 2870. Similarly, all the bits from the second column of Priority Sorting Page 2860 are ORed and the result is put in the second cell of Priority Result vector 2870, and so on until all nine cells of Priority Result vector 2870 are determined.

STEP THREE detects the position R of the cell having the right-most one in the Priority Result vector 2870.

STEP FOUR then outputs a column 2875 of the Priority Sorting Page 2860 that has the same position R as the right-most one cell position detected in STEP THREE. Column 2875 of Priority Sorting Page 2860 is muxed out by a Mux 2878 and supplied to the Page Wiping Advice register ADV 2880. The bits in Priority Result 2870 constitute selector controls for the Mux 2878. The Mux 2878 has Nx9 inputs for the respective nine columns of Priority Sorting Page 2860. The Mux 2878 has an Nx1 output to couple the selected column to register ADV. In an alternative embodiment, the selected column (e.g., 2875) effectively acts as the Page Wiping Advice and is muxed directly to Page Selection Logic 2885.

In FIG. 10, Page Selection Logic 2885 has an output that actually wipes a page from Secure RAM and/or loads a page to Secure RAM. Page Selection Logic 2885 loads pages to Secure RAM as needed by the application until Secure RAM is full. Regardless of the priority of existing pages in Secure RAM, in this example, if Secure RAM is not yet full, no existing pages are wiped, since space remains for new pages to be loaded. When Secure RAM becomes full, then the Page Wiping Advice register 2880 contents are used. The Page Wiping Advice register feeds Page Selection Logic 2885. Each "one" in the Page Wiping Advice ADV 2880 signifies a page that can be wiped from a set of Pages 2890 currently residing in Secure RAM. Typically, there is just a single one (1) in the Page Wiping Advice ADV 2880. When Secure RAM is already full, then a single corresponding page is wiped from Pages 2890 in response to the "one" in the Page Wiping Advice ADV 2880.

Consider each interval in which the execution of the application runs by making read and write accesses to pages in Secure RAM without needing to load any new page from external memory. During each such interval, the Page Access Counter is continually updated with running counts of accesses respective to each Page[N]. Any instance of a write access to a page is used to update WR[N].

After Page Activity 2890 has been updated with each given instance of a page N being either loaded or wiped according to the Page Wiping Advice ADV 2880, the STAT, TYPE, and WR registers (collectively, a Data register 2840) are updated. Page Access Counters 2845 is set to 127 in the particular counter corresponding to the new page. Register STAT is updated for each such new page from the Page Access Counters 2845 according to TABLE 3. Register TYPE is updated for the new page as Code or Data. Register WR is initially set to the Clean value respective to the new page. In some embodiments, the Concatenation Case Table 2850 is a separate structure and is correspondingly updated; in other embodiments, the registers STAT, TYPE and WR collectively constitute the Concatenation Case Table 2850 itself.

The Priority Sorting Page 2860 is also correspondingly updated, and the process of generating Priority Result 2870 and Page Wiping Advice ADV 2880 is repeated in this way continually to keep the Page Selection Logic 2885 fed with wiping advice from register ADV.
In that way, upon a Load Page active input due to a page fault occurrence, Page Selection Logic 2885 readily identifies which page to wipe. Page Selection Logic 2885 keeps the page-wipe status current, such as by updating the Page Active register 2890 with a zero or by making an entry in an additional bit field in the Page Active register 2890 constituting a Page Wipe control register.

Then the SDP Swap Manager responds to the updated Page Wipe information. If the WR[N] bit is Dirty for the particular page N that is wiped, then a Swap Out operation is performed; if Clean, then no Swap Out operation is needed. Then a Swap In operation gets the new page and loads it into the particular physical page slot N, overwriting the page N that was wiped. Then the Page Activity register 2890 is updated to indicate that the new page is active in page slot N.

Initialization is performed at the beginning of the process by clearing all of the data structures 2840, 2850, 2860, 2870, ADV 2880, 2890. Also, when the Secure RAM is being loaded in early phases of execution, as-yet unused page spaces in Secure RAM are available for incoming new pages being loaded according to FIG. 10. Prioritization from Page Wiping Advice is suitably ignored by Page Selection Logic 2885 until all the physical page slots of Secure RAM for data and code pages governed by SDP are full of physical pages.

When Secure RAM is full and a new page needs to be loaded, the secure demand paging mechanism chooses an existing page in Secure RAM for wiping that is the most suitable for its internal processing without any added distinction required. When the Page Wiping Advice ADV register 2880 has two or more bits HIGH, the SDP mechanism of Page Selection Logic 2885 and/or SDP software in different embodiments takes the first page it sees set or takes a page randomly from among the pages with an ADV bit set to one. If all ADV bits are set to zero, the SDP mechanism of Page Selection Logic 2885 and/or SDP software in different embodiments takes the first page it sees or takes a page randomly from among all the pages for which wiping is permitted. The SDP mechanism also benefits from information indicating that several pages can be replaced, and can thus replace more than one page even if only one was requested.

Alternative embodiments use the following processes for multiple ones in ADV register 2880: 1) take the first one, 2) take one randomly, 3) resolve the tie by taking the page with the lowest Page Access Counter 2845 value, 4) replace more than one page, 5) reserve one slot for Data pages and a second slot for Code pages, and 6) reserve respective slots for respective applications in a multi-threaded SDP. If the usage level is HIGH for all pages in Secure RAM, it is also possible for all-zeroes to appear in ADV register 2880. A similar set of the just-listed process alternatives is used in various embodiments when all-zeroes appear in ADV register 2880.
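Among the tie-break alternatives just listed, option 3 (lowest Page Access Counter wins) might be sketched as follows; pick_page is an assumed name, and a return of -1 signals the caller to fall back to random or first-set selection.

    #include <stdint.h>

    #define NUM_PAGES 16

    /* Resolve multiple ones in ADV by choosing the advised page whose
     * access counter is lowest, i.e., the least recently favored page. */
    int pick_page(uint16_t adv, const uint8_t ctr[NUM_PAGES])
    {
        int best = -1;
        for (int n = 0; n < NUM_PAGES; n++) {
            if ((adv >> n) & 1u) {
                if (best < 0 || ctr[n] < ctr[best])
                    best = n;
            }
        }
        return best;   /* -1: no advice bit set */
    }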
To replace more than one page even if only one was requested, a program flow anticipation process suitably determines what page(s) to swap in, since the request only identifies one such page. In this program flow anticipation process, when the SDP Page Selection Logic 2885 reads from the SSM ADV register that three pages can be wiped out, the SDP Page Selection Logic 2885 and/or SDP software replaces the first page with the page requested and the two remaining pages with the two pages adjacent to the page requested. Other rules are suitably instantiated in the practice of various embodiments by modeling the software behavior so as to leverage information identifying which data or code page is linked with another data or code page.

In embodiments that swap out more than one page at a time when appropriate, CPU bandwidth is advantageously saved by avoidance of a future page fault when a future page fault is otherwise likely, such as in the case of a secure process context switch.

Page Access Counters 2845 is an example of a page access table arrangement that has page-specific entries. Each page-specific entry is set to an initial value by entry of a new page corresponding to that entry in the internal memory. That initial value can be 0 or top-of-range (e.g., 127) or some other value chosen for the purpose. The page-specific entry is reset to a value substantially approximating the initial value in response to a memory access to that page. In some embodiments, the entry is reset to the initial value itself, but some variation is permissible, suitable, and useful for performance. Further, the page-specific entry is changed in value by some amount in response to a memory access to a page other than the page corresponding to that entry. The change may be by a positive or negative amount, or by incrementing or decrementing, or by some random number within a range, with other variations for performance in various applications.

The Concatenation Case Table 2850 also has various forms in different embodiments. In one type of embodiment, the concatenation case table is suitably a compact array of storage elements having a layout much as shown in FIG. 10. In another embodiment, storage elements for Usage Level STAT, page Type TYPE, and page-modified WR are scattered physically. In another embodiment, the page-specific Usage Level range for register STAT is simply derived by a coupling or a few logic gates coupled to the Page Access Counter to determine from high-order bits the range or tier in which a given value of page access statistic lies. Thus, a separate storage element for the usage level may be absent, even though a given usage level is formed from the statistic.

The conversion circuit 2855 responds to the concatenation case table to generate a page priority code for each page. In some embodiments, conversion circuit 2855 generates a page priority code field having a singleton bit value accompanied by complement bit values, the singleton bit value having a position across the page priority code field representing page priority. Other priority codes are used in other embodiments.

Some embodiments include a priority sorting table 2875 as a physical structure accessible by the priority sorting circuit and holding the page priority code for each page generated by the conversion circuit 2855. The priority sorting circuit searches for at least one page priority code in the priority sorting table having the singleton bit value in the position across the page priority field representing the highest page priority, thereby identifying a page having that page priority code. When more than one page has the highest page priority, one of them is selected by some predetermined criterion, such as first, last, highest or lowest, or randomly selected as the page to wipe from among the pages having the highest page priority.

A right-most ones detector is but one example of a priority sorting circuit for identifying at least one page having a highest page priority.
Depending on the arrangement, an extreme-ones detector such as either a right-most ones detector or a left-most ones detector is suitable, and yet other alternative approaches are used depending on the manner of representing the priority from the conversion circuit 2855.

In FIG. 11, another embodiment has a set of Page Access Counters 2845, and a Concatenation Case Table 2850 provided with STAT, TYPE, WR bits for each physical page in Secure RAM. A Page Identification Counter 2910 cycles through physical page identifying bits (e.g., 4 bits from 0000 through 1111 binary). Page Identification Counter 2910 provides these bits as selector controls to a 16:1 Mux 2920. Mux 2920 supplies Concatenation Case codes row-by-row, such as "1001" from the row for page 0 at the top of Concatenation Case Table 2850.

A Priority Conversion circuit 2955 responds to each Concatenation Case code supplied by Mux 2920 and converts it to a Priority Code, such as the nine-bit codes of TABLE 6, or four-bit binary codes representing numbers from zero (0) to nine (9) decimal, or otherwise as described herein.

Further, a Priority Maximum Detector 2970 is fed directly by the Priority Conversion circuit 2955. The Priority Maximum Detector 2970 finds the maximum Priority Code among the Priority Codes (e.g., 16 of them) fed to Detector 2970 on the fly. Detector 2970 is any appropriate maximum detector circuit. One example of a Detector 2970 is an arithmetic subtractor circuit fed with successive Priority Codes, the subtractor output conditionally updating a temporary holding register when a new Priority Code arrives that is greater than any previous Priority Code fed to it in the cycle. Concurrently and conditionally, an associated temporary holding register is updated with the 4-bit page identification (Page ID) bits supplied by the Page Identification Counter when each greatest new Priority Code arrives. The temporary holding register for Priority Code is fed back to the subtractor for comparison with succeeding incoming Priority Codes. Comparing FIG. 10 with FIG. 11, note that the right-most one detection in FIG. 10 acts as a type of maximum detector of different structure.

When Page Identification Counter 2910 rolls over from 1111 to 0000 to begin a new cycle, the associated temporary holding register for Page ID is clocked to an output PAGE_ID_TO_WIPE, which remains valid during the entire new cycle until updated.

Also, the occurrence of roll-over to 0000 by the Page Identification Counter 2910 is fed to circuitry in Priority Maximum Detector 2970 to reset the temporary register for Priority Code so that the maximum is re-calculated on the new cycle. Thus, Priority Maximum Detector 2970 is cycled and reset by Page Identification Counter 2910, and Detector 2970 has a storage element to store the latest page identification having the highest page priority as conversion by Priority Conversion circuit 2955 proceeds through the various pages. The output PAGE_ID_TO_WIPE is suitably fed directly to the ACT register or to analogous control registers.

Note that Priority Maximum Detector 2970 automatically operates, in the particular subtractor example, to pick the first or last of multiple pages having the maximum Priority Code for wiping, if that circumstance occurs. If more than one page has the same highest priority value among all the pages, the Priority Maximum Detector 2970 simply stores the identification of the first or last page tied with any other page having that highest priority value, depending on whether the conditional output of the subtractor is arranged for greater-than-zero (picks the first tied page) or greater-than-or-equal-to-zero (picks the last tied page). Detector 2970 then returns that page identification as the page wiping advice PAGE_ID_TO_WIPE.
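A one-pass software model of this running-maximum detector is sketched below, assuming a binary Priority Code in which a numerically greater value means a higher wiping priority (the 9-bit singleton coding would first be translated to such a number). Using '>' picks the first of tied pages, as the text notes; '>=' would pick the last. The name max_priority_page is an assumption.

    #include <stdint.h>

    #define NUM_PAGES 16

    /* Mimic the subtractor-plus-holding-register circuit: sweep the page
     * IDs once, keeping the greatest priority code and its page ID. */
    int max_priority_page(const uint16_t prio_code[NUM_PAGES])
    {
        uint16_t best_code = 0;
        int page_id_to_wipe = -1;                 /* output PAGE_ID_TO_WIPE      */
        for (int id = 0; id < NUM_PAGES; id++) {  /* Page Identification Counter */
            if (prio_code[id] > best_code) {      /* conditional update          */
                best_code = prio_code[id];
                page_id_to_wipe = id;
            }
        }
        return page_id_to_wipe;
    }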
In FIG. 11, supporting Page Selection Logic analogous to 2885 of FIG. 10 is provided as appropriate. Various types of page selection logic 2885 are suitably fed by the Priority Sorting Page 2860 for selecting a page in the memory to wipe. Page selection logic suitably has an input coupled to the activity register to override the page selection logic when there is an empty page slot in the memory. Another embodiment couples the page activity register 2890 to the conversion circuit 2855, and the conversion circuit 2855 assigns a higher page priority for wiping to an empty page than to an occupied page.

In FIGS. 12A and 12B, operational flow embodiment 3000 for aspects of FIGS. 9, 10, 11 and 13 commences operations with a BEGIN 3005 in FIG. 12A and proceeds to a decision step 3010 to determine if a New Page has been Swapped In to page slot [N]. By "New Page" is meant a page that has just previously been absent from Secure RAM 1034, whether or not that page was present in Secure RAM at some time in the past or has never been in Secure RAM at any time in the past. If so, then a step 3015 updates TYPE[N] for Code or Data page Type, resets counter N to its initialization value (e.g., 127) in Page Access Counters 2845, and goes to a step 3020.

In decision step 3010, if no Swap In of a New Page is found, then operations go to decision step 3020. Decision step 3020 determines whether a Read or Write access has just been made to a page in Secure RAM 1034. If so, then operations proceed to a decision step 3025 to determine, for each given page N, whether the access was made to that page N. If the access was made to the given page N, then a step 3030 resets counter CTR[N] to (or increments approximately to) the initialization value (e.g., 127), and a succeeding step 3035 updates the register WR[N] to a Dirty state for page N in case of a Write access.

In step 3025, if the access was made to a page other than each given page N, then a step 3040 adjusts the counter CTR[N], such as by decrementing CTR[N]. Thus, page N has a corresponding counter N adjustment indicative of access, which is a sign of usage, and all the other pages have their respective counter values adjusted to indicate the non-access at this moment. In this way, counter statistics are progressively developed in Page Access Counters 2845.

After either of steps 3035 and 3040, a step 3050 converts and stores the statistics in all the counters CTR[0], CTR[1], ... CTR[N] into corresponding updated Usage Levels in register STAT in register 2740 of FIG. 9.

After step 3050, or upon a No from page access decision step 3020, operations proceed to a decision step 3060 to determine whether a Page Fault has occurred by an attempted access to a page that is absent from Secure RAM 1034. If not, operations loop back and ordinarily reach decision step 3010 to proceed with a new updating cycle.

Continuing the flow description in FIG. 12B: if a Page Fault has occurred (Yes) in step 3060 of FIG. 12A, then a decision step 3065 determines whether Secure RAM 1034 has any empty page slot into which a new page could be put. If no empty slot is detected in decision step 3065, then operations in a step 3070 prioritize for wiping the pages currently resident in Secure RAM 1034, find the page(s) having the greatest priority for wiping, and load the Page Wiping Advice register ADV of FIG. 9.

Then a step 3075 selects a particular page N to wipe based on a wiping advice bit in the register ADV, or by selection from two or more wiping advice bits that might have been set in register ADV, or by selection from all the pages if no wiping advice bit was set in register ADV.

Next, a decision step 3080 determines if the page N selected in step 3075 is a modified page (WR[N]=1). If Yes in step 3080, then a step 3085 performs a cryptographic operation (such as encryption, hashing, or both) and Swaps Out the page N that is wiped. If page N is found to be a not-modified page in step 3080, or step 3085 has been reached and completed, then operations Swap In a New Page in a step 3090.

After the Swap In of step 3090 of FIG. 12B, or if there was no Page Fault detected in step 3060 of FIG. 12A, then operations go to a decision step 3095 in FIG. 12A to determine whether there is a circuit reset or other termination of operations. If not, then operations go back to step 3010 to do a new cycle of operation. If Yes in step 3095, then operations go to RETURN 3098 and are thus complete.
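The FIG. 12B page-fault path can be summarized in C driver form. Every helper below (find_empty_slot, compute_wiping_advice, select_victim, page_is_dirty, swap_out_encrypted, swap_in_new_page) is a hypothetical name introduced only for illustration; the structure of the flow, not the API, is the point.

    #include <stdint.h>

    /* Hypothetical helpers standing in for SDP software and SSM hardware. */
    int      find_empty_slot(void);         /* step 3065: -1 if Secure RAM full */
    uint16_t compute_wiping_advice(void);   /* step 3070: load/read register ADV */
    int      select_victim(uint16_t adv);   /* step 3075: pick page N to wipe   */
    int      page_is_dirty(int slot);       /* step 3080: WR[N] == 1?           */
    void     swap_out_encrypted(int slot);  /* step 3085: encrypt/hash, Swap Out */
    void     swap_in_new_page(int slot);    /* step 3090                        */

    void on_page_fault(void)
    {
        int slot = find_empty_slot();
        if (slot < 0) {                     /* no empty slot: a page must be wiped */
            uint16_t adv = compute_wiping_advice();
            slot = select_victim(adv);
            if (page_is_dirty(slot))
                swap_out_encrypted(slot);   /* clean victims skip the Swap Out */
        }
        swap_in_new_page(slot);
    }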
STATIC AND DYNAMIC PHYSICAL PAGE ALLOCATION

In various embodiments, and in the same embodiment for different software applications, an optimum usage of pages can be established or selectively established by having an Allocation Ratio of Code pages to Data pages be within a predetermined range (statically established).

In still other embodiments, the Allocation Ratio of Code pages is dynamically learned. The Allocation Ratio of Code pages to Data pages for allocating Secure RAM physical page space is suitably learned or determined given a limited number N of total secure memory pages. Determining the Allocation Ratio statically is either obviated or augmented in some embodiments that dynamically learn the ratio, in effect. Hybrid embodiments that determine the Allocation Ratio statically for some applications or purposes and dynamically for other applications or purposes are also contemplated.

The SDP mechanism generates the page selection from the behavior of the application software through an internally activated learning process of SDP that counts actual activity of data pages and code pages, to thereby generate statistics to stabilize on the appropriate allocation ratio. In terms of memory organization that re-groups or allocates DATA or CODE pages, the ratio is learned or implicitly results from the activity by SDP register ADV 2880. For example, suppose the allocation ratio of Code page slots to Data page slots in Secure RAM is initialized at unity (1 Code page per 1 Data page). Further suppose that the particular application is swapping in Code pages at the rate of 2:1 (two Code pages swapped in for every one Data page).

Then the SDP mechanism in one embodiment increments the number of page slots in Secure RAM allocated to Code pages by one, and decrements the number of page slots allocated to Data pages by one.
Limits are imposed so that there is always a minimum number of at least one or two Code pages and at least one or two Data pages, to bound the range of the allocation and always include both Code and Data pages in the SDP process.

Suppose the particular application continues swapping Code pages at a higher Swapping Ratio relative to Data pages than the Allocation Ratio of Code Slots divided by Data Slots established by SDP for Secure RAM. Then the Allocation Ratio is continually increased by incrementing the number of page slots allocated for Code pages and decrementing the number of page slots allocated for Data pages. At some point, the process settles at a point wherein the Swapping Ratio equals the Allocation Ratio. In this embodiment, the pseudocode might provide, after S number of swaps updating Code and Data swap statistics, for instance:

IF SWAP_RATIO - ALLOCATION_RATIO > EPSILON THEN
    CODE_SLOTS <= CODE_SLOTS + 1
    DATA_SLOTS <= DATA_SLOTS - 1
ELSE IF ALLOCATION_RATIO - SWAP_RATIO > EPSILON THEN
    CODE_SLOTS <= CODE_SLOTS - 1
    DATA_SLOTS <= DATA_SLOTS + 1;

Feedback in the above process drives the Allocation Ratio to be approximately equal to the Swap Ratio. The Allocation Ratio is changed by changing the number of CODE_SLOTS and DATA_SLOTS, which always sum to the available number of physical page slots in Secure RAM. Then the Swap Ratio changes in a complex way, partly in response to the change in Allocation Ratio and partly in response to the structure of the area of current execution in the application program. Even though the behavior and dependencies are complex, the dynamic learning feedback process accommodates this complexity. The value EPSILON is set at a predetermined amount, such as 0.2, to reduce hunting by the learning feedback loop near a settling point where the Swap Ratio equals the Allocation Ratio. In actual execution of an application program, continual adaptation by the dynamic learning feedback process is provided whether a settling point exists or not. Thus, the SDP register ADV 2880, and the process that drives it, not only chooses page locations to wipe but also dynamically evolves a ratio of Code pages to Data pages in Secure RAM.

Limits are placed on the increments and decrements so that at least one slot for a Code page and at least one slot for a Data page are allocated in Secure RAM. In this way, Swap is always possible for either type of page.

Pre-existing application software is suitably used as-is, or prepared for use, with the SDP-governed Secure RAM space. For instance, software-generating methods used to prepare the application program suitably size-control the program loops to reduce the number of repetitions of loops that cycle through more virtual pages than are likely to be allocated in Secure RAM space for the loops. Big multi-page loops with embedded subroutine calls are scrutinized for impact on Swap overhead and thrashing, given a particular Allocation Ratio and allocable number of pages in Secure RAM. Various SDP embodiments can perform to the best extent possible given the actual application programs that they service, and some application programs, as noted, permit SDP efficiency to display itself to an even fuller extent.
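Rendered as C, the feedback above might look like the sketch below; adjust_allocation is an assumed name, EPSILON = 0.2 follows the text, and the guards preserve the minimum one-slot-per-type limit.

    /* One adjustment step of the dynamic-learning feedback loop. */
    void adjust_allocation(double swap_ratio, int *code_slots, int *data_slots)
    {
        const double EPSILON = 0.2;   /* dead band to reduce hunting */
        double alloc_ratio = (double)*code_slots / (double)*data_slots;

        if (swap_ratio - alloc_ratio > EPSILON && *data_slots > 1) {
            (*code_slots)++;          /* code is being swapped relatively often */
            (*data_slots)--;
        } else if (alloc_ratio - swap_ratio > EPSILON && *code_slots > 1) {
            (*code_slots)--;          /* data pages are the bottleneck instead */
            (*data_slots)++;
        }
    }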
Minimizing hunting in the dynamic learning process is here explained using some scenarios of operation. In a first scenario, suppose execution of an application program is page-linear in the sense that execution occurs in a virtual page, then proceeds to another virtual page and executes some more there, and then proceeds similarly to a third and subsequent pages until execution completes. With a page-linear application, a single Code page could suffice, since each new Code page needs to be swapped in once to Secure RAM because the application is a secure application and is to be executed from Secure RAM. Since execution does not return to a previously executed page, there is no need to Swap In any Code page twice. There is no thrashing, and there is no need for even a second Code page slot in Secure RAM in this best-case example.

In a second scenario, suppose execution of an application program is page-cyclic in the sense that somewhere in the application, execution occurs in one virtual page, then proceeds directly or by intervening page(s) to another virtual page and executes some more there, and then loops back to the first-mentioned virtual page. In this case, Swapping In the first-mentioned page could have been avoided if there were at least one additional Code slot as a physical page slot in Secure RAM. Where loops cycle many times between pages, repeated Swapping is avoided by providing enough physical Code page slots in Secure RAM so that the repeated Swapping is unnecessary, since the needed pages are still resident in the Secure RAM.

The subject of hunting enters the picture as follows. Suppose allocating a given number M of Code page slots in Secure RAM produces very little thrashing. Then suppose decrementing the allocation from M to just one less page, M-1, produces a lot of thrashing because the application has a loop that cycles through M pages. There may be a stair-step non-linearity, so to speak, in the efficiency. Accordingly, some dynamic learning embodiments herein keep the two most recent previous statistics on Swap Ratio prior to a given decrement operation. If those statistics indicate a large gap between the last two Swap Ratio values, the decrement operation is in some embodiments omitted, because the next re-allocation might start a cycle of hunting and increase the amount of Swapping and thrashing. Because a settling point might not in fact exist due to the dynamics of an application program, other dynamic learning embodiments that might not have this extra precaution are regarded as quite useful too.

A second dynamic learning embodiment recognizes that Data pages include time-consuming Dirty page Swap Out as well as Data page Swap In, whereas Code pages in this example are always clean. Accordingly, the Swap Ratio in this embodiment should settle at a point that takes a Dirty-Swap-Out factor into account, such as by allocating somewhat more space to Data pages than would otherwise result from equalizing the Allocation Ratio to the Swap Ratio. This second embodiment keeps statistics on the number of Code pages, the number of Data dirty pages, and the number of Data not-dirty (clean) pages.
The time required for SDP to service these pages is either known by pre-testing or measured in clock cycles on the fly. For this second embodiment, define symbols as follows:

C    Number of Code page wipes plus new Code page Swap Ins per second
TC   Code page wipe plus new Code page Swap In time (milliseconds)
Dn   Number of Data not-dirty page wipes with new Data page Swap Ins per second
TDn  Data not-dirty page wipe plus Swap In time (milliseconds)
Dd   Number of Data dirty page Swap Outs with new Data page Swap In per second
TDd  Data dirty page Swap Out plus Swap In time (milliseconds)

Then the time-based ratio of Code page time to Data page time is written down and used to direct the process ahead of the testing step on the Swap Ratio minus Allocation Ratio. A pseudocode example for this second embodiment is provided next below:

SWAP_RATIO <= C*TC / (Dn*TDn + Dd*TDd);
DELTA <= 1; ADJUST <= 1;
ALLOCATION_RATIO <= CODE_SLOTS / DATA_SLOTS;
IF SWAP_RATIO * DATA_SLOTS - CODE_SLOTS > DELTA THEN
    CODE_SLOTS <= CODE_SLOTS + ADJUST
    DATA_SLOTS <= DATA_SLOTS - ADJUST;
ELSEIF CODE_SLOTS - SWAP_RATIO * DATA_SLOTS > EPSILON THEN
    CODE_SLOTS <= CODE_SLOTS - ADJUST
    DATA_SLOTS <= DATA_SLOTS + ADJUST;

In words, if the time it takes for SDP to service Data pages on average is much higher than the time SDP takes to service Code pages, then the redefined Swap Ratio falls, compared to a ratio C/D of Code to Data page rates in the first embodiment, which does not take into account the relative computational complexity of SDP servicing different types of pages. DELTA is a threshold of adjustment (e.g., unity or some other number of page slots). EPSILON in the first embodiment might change for a criterion based on a difference between Allocation Ratio and Swap Ratio values. The second embodiment, in effect, multiplies that difference by the number of Data Slots and compares it to EPSILON, which is less likely to change over the range of allocation. In other words, a number EPSILON having the value of a page slot threshold (e.g., one (1)) is compared with the difference between the number of Code Slots allocated and the product of the Swap Ratio times the number of Data Slots allocated.

In both of the above dynamic learning pseudocode examples, the Allocation Ratio is effectively made to rise by the IF-THEN first part of the conditional pseudocode, and the Allocation Ratio is made to fall by the ELSEIF-THEN second part of the conditional pseudocode. The amount of adjustment ADJUST is given as plus-one (+1) page slot or minus-one (-1) page slot, and empirical testing can show the usefulness of other alternative increment values as well.

Initialization of the number of CODE_SLOTS and the number of DATA_SLOTS is suitably predetermined and loaded in Flash memory as CODE_SLOTS_START and DATA_SLOTS_START values. The initialization is then adjusted by SDP software based on actual operation and stored on an application-specific basis for use as the application is subsequently re-started in many instances of actual use in a given handset or other system.
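The second embodiment's pseudocode above translates to C as follows; the parameter names mirror the symbols C, TC, Dn, TDn, Dd, TDd, and DELTA and EPSILON are both taken as one page slot per the text. This is a sketch of the stated feedback rule, not a tuned implementation.

    /* One time-weighted adjustment step: swap effort is measured in
     * service time, not just page counts. */
    void adjust_allocation_timed(double c, double tc, double dn, double tdn,
                                 double dd, double tdd,
                                 int *code_slots, int *data_slots)
    {
        const double DELTA = 1.0, EPSILON = 1.0;  /* page-slot thresholds */
        const int ADJUST = 1;
        double swap_ratio = (c * tc) / (dn * tdn + dd * tdd);

        if (swap_ratio * *data_slots - *code_slots > DELTA && *data_slots > ADJUST) {
            *code_slots += ADJUST;
            *data_slots -= ADJUST;
        } else if (*code_slots - swap_ratio * *data_slots > EPSILON && *code_slots > ADJUST) {
            *code_slots -= ADJUST;
            *data_slots += ADJUST;
        }
    }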
A multi-threaded embodiment reserves respective slots for respective applications in a multi-threaded SDP. When one of the applications in the multi-threaded embodiment is added, the slot assignments are changed as follows. From a software point of view, if the SDP multi-threads several applications, the application not running will inevitably be evicted after a while. Thus the slot assignment consists of deactivating the pages of the application not running in order to keep them out of the sorting machine or process. This also keeps the deactivated pages' statistics frozen.

Also, consider the initial Allocation Ratio and Swap Ratio for a dynamic-learning multi-threaded embodiment wherein a context switch in the middle of execution of a first application (Virtual Machine Context VMC1) may switch execution to the beginning or middle of a second application (Virtual Machine Context VMC2). There, the current Swap Ratio and Allocation Ratio for the first application at the time of the context switch are stored for use when the first application is swapped back in to resume execution later. Upon context switch to the second application, analogous earlier Swap Ratio and Allocation Ratio information is retrieved so that the second application efficiently executes by benefiting from its own previous experience.

In FIG. 13, another dynamic learning embodiment 3100 responds to a control signal CODE to selectively perform prioritization and wiping advice for Code pages, or selectively perform prioritization and wiping advice for Data pages. Either of the embodiments of FIG. 10 and FIG. 11 is suitably arranged for dynamic learning. FIG. 13 shows some rearrangements based on FIG. 10; FIG. 11 is suitably rearranged analogously.

In FIG. 13, Page Access Counters 3145, Concatenation Case Table 3150, Conversion Table Lookup 3155 and Priority Sorting Page 3160 are illustratively analogous to the correspondingly-named structures 2845, 2850, 2855 and 2860 of FIG. 10.

Priority Result 3170 is loaded with OR-bits in the following manner. The bits entered in a given column of Priority Sorting Page 3160 are fed to an OR-gate or OR-process. These bits in a given column of Priority Sorting Page 3160 correspond to the pages from 0 to total number N. Priority Sorting Page 3160 is an N-by-9 array or data structure in this example. The OR operation is performed on each of the nine columns of Priority Sorting Page 3160 to supply nine (9) bits to Priority Result 3170 of FIG. 13, as represented by design pseudocode next below.

In this embodiment, pseudocode defines structure and process that selectively respond to the CODE signal and the TYPE information. For instance, suppose that control signal CODE is active (e.g., high or one), meaning that only Code pages in the page slots allocated to Code pages are allowed to be prioritized and used to generate wiping advice pertaining to a Code page and not a Data page. In that case, CODE being high agrees with the TYPE[N] of each page N that is actually a Code page, by TYPE[N] being high (one).

A set of XNOR (Exclusive-NOR) gates, equal in number to the number of pages (e.g., 16), are collectively designated XNOR 3183. (An XNOR gate supplies a high output when its two inputs are both high or both low; the output is otherwise low.) When CODE and TYPE[N] are both high, each particular XNOR gate in XNOR 3183 returns an active output (high, one). The XNOR high output qualifies an AND gate that passes through the state of PRIORITY_SORTING_PAGE_N[0] to PRIORITY_RESULT[0]. The just-described process is similarly performed for each column of Priority Sorting Page 3160 to load each corresponding bit of Priority Result 3170.

PRIORITY_RESULT[0] = (PRIORITY_SORTING_PAGE_0[0] AND (TYPE[0] XNOR CODE)) OR ... OR ...
    OR (PRIORITY_SORTING_PAGE_N[0] AND (TYPE[N] XNOR CODE))
...
PRIORITY_RESULT[8] = (PRIORITY_SORTING_PAGE_0[8] AND (TYPE[0] XNOR CODE)) OR ... OR ... OR (PRIORITY_SORTING_PAGE_N[8] AND (TYPE[N] XNOR CODE))

Next, for each page N, the process successively looks right-to-left, by an IF..ELSE IF..ELSE IF successively-conditional sifting structure and procedure (in CAPS hereinbelow), for the highest priority value (right-most one) for which PRIORITY_RESULT[] 3170 is a one. Because the whole process of loading Priority Result 3170 is conditioned on TYPE[N] XNOR CODE, the subsequent right-most ones detection in Priority Result 3170 makes this determination only for the Code pages in Secure RAM.

This successively-conditional procedure moves column-wise in Priority Result 3170 from right to left. As soon as the first "one" (1) is found, which is the right-most one, the process pseudocode hereinbelow loads register ADV and falls through to completion. This right-most one in Priority Result 3170 identifies corresponding column 3175 of Priority Sorting Page 3160. Column 3175 is significant for determining which page to wipe. The process loads the Code-page-related entries in column 3175 of Priority Sorting Page 3160 via Mux 3178 into the Page Wiping Advice register ADV 3180 to establish the wiping advice bit entries in register ADV. The Code-page-related entries fed to register ADV are qualified by the action of XNOR 3183.

The Page Wiping Advice register ADV is thus loaded, by design pseudocode hereinbelow, by concatenation of PRIORITY_SORTING_PAGE 3160 bits (pertaining to all the physical pages) in column 3175, based on that highest priority right-most one position detected in Priority Result 3170. If the successive procedure fails to find a PRIORITY_RESULT bit active (1) for any of the priorities from 0 to 8, then operations load register ADV with all zeroes, and also zero a default bit named CODE_OTHERS.

IF PRIORITY_RESULT[0] = 1 THEN ADV = (PRIORITY_SORTING_PAGE_0[0] AND (TYPE[0] XNOR CODE)) & ... & ... & (PRIORITY_SORTING_PAGE_N[0] AND (TYPE[N] XNOR CODE))
ELSE IF PRIORITY_RESULT[1] = 1 THEN ADV = (PRIORITY_SORTING_PAGE_0[1] AND (TYPE[0] XNOR CODE)) & ... & ... & (PRIORITY_SORTING_PAGE_N[1] AND (TYPE[N] XNOR CODE))
ELSE IF PRIORITY_RESULT[2] = 1 THEN ADV = (PRIORITY_SORTING_PAGE_0[2] AND (TYPE[0] XNOR CODE)) & ... & ... & (PRIORITY_SORTING_PAGE_N[2] AND (TYPE[N] XNOR CODE))
...
ELSE IF PRIORITY_RESULT[8] = 1 THEN ADV = (PRIORITY_SORTING_PAGE_0[8] AND (TYPE[0] XNOR CODE)) & ... & ... & (PRIORITY_SORTING_PAGE_N[8] AND (TYPE[N] XNOR CODE))
ELSE ADV <= 0; OTHERS <= 0;
ENDIF;

In cases where the Data pages are prioritized, the control signal CODE goes low. As a result, the expression TYPE[N] XNOR CODE is active high for pages having TYPE[N] = 0, meaning Data pages. Then the Priority Result 3170 is generated only from the entries in Priority Sorting Page 3160 pertaining to Data pages. Further, the right-most ones detection on Priority Result 3170 thereby pertains only to the Data pages, and finds the highest priority column for them in Priority Sorting Page 3160. Then the pseudocode next above loads the Page Wiping Advice register ADV only with entries pertaining to Data pages from that highest priority column.
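In software terms, the XNOR 3183 qualification simply zeroes the priority rows of pages whose TYPE disagrees with the CODE control before the column OR and sift run. The sketch below (wiping_advice_typed is an assumed name) reuses the wiping_advice() routine sketched earlier for FIG. 10.

    #include <stdint.h>

    #define NUM_PAGES 16

    uint16_t wiping_advice(const uint16_t sorting_page[NUM_PAGES]);  /* earlier sketch */

    /* code = 1: advise only among Code pages; code = 0: only Data pages. */
    uint16_t wiping_advice_typed(const uint16_t sorting_page[NUM_PAGES],
                                 const uint8_t type[NUM_PAGES], int code)
    {
        uint16_t qualified[NUM_PAGES];
        for (int n = 0; n < NUM_PAGES; n++) {
            int match = (type[n] != 0) == (code != 0);  /* TYPE[n] XNOR CODE */
            qualified[n] = match ? sorting_page[n] : 0;
        }
        return wiping_advice(qualified);
    }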
Page Selection Logic 3185 is similarly qualified by the pages allocated for Code to produce the signal to wipe a particular Code page, or alternatively qualified by the pages allocated for Data to produce the signal to wipe a particular Data page.

Suppose the dynamic learning process determines that a reallocation is needed to allocate a slot for another Code page. The process is driven by the decrement of DATA_SLOTS to wipe a Data page to make way for another Code page. Accordingly, the process makes the control signal CODE go low, so that Priority Sorting Page 3160 and Priority Result 3170 are processed to supply Data page wiping advice via Mux 3178 into register ADV 3180. The Data-page-related entries fed to register ADV are qualified by the action of XNOR 3183. In FIG. 13, the legend CODE/DATA is used to indicate the selective operation with respect to Code pages CODE PP. separate from Data pages DATA PP.

Page Selection Logic 3185 responsively directs a wipe of a particular Data page, and signals SDP Swap Manager software to Swap Out that Data page if it is Dirty and, in any event, Swap In a new Code page into the just-wiped page slot in Secure RAM. Operations resume executing the application and servicing it with SDP given the new allocation.

Conversely, suppose the dynamic learning process determines that a reallocation is needed to allocate a slot for another Data page. The process is driven by the decrement of CODE_SLOTS to wipe a Code page to make way for another Data page. Accordingly, the process makes the control signal CODE go high, so that Priority Sorting Page 3160 and Priority Result 3170 are processed to supply Code page wiping advice into register ADV 3180. Page Selection Logic 3185 responsively directs a wipe of a particular Code page, and signals SDP Swap Manager software to Swap In a new Data page into the just-wiped page slot in Secure RAM. Operations resume executing the application and servicing it with SDP given the new allocation.

Still other embodiments re-group pages. The SDP software mechanism in some embodiments allocates and organizes physical pages of Secure memory, such as Data pages into Secure RAM page slots (0->5) and Code pages into page slot 6 up to the highest-numbered page slot, for example. In some process embodiments that load multiple applications (multi-threaded SDP), some slots are suitably reserved for APP1 or APPn. The SDP mechanism suitably operates when possible to re-group pages that have a relationship or meaning in common.

An example of the usefulness of re-grouping application pages (app page 1, ..., app page N) can be seen by considering that in a larger system, the pages can have fragmentation problems like those a hard drive can experience. Re-grouping has particular value because it delivers automatic de-fragmentation. "Fragmentation" pertains to a condition of related pages becoming widely separated in a storage medium so that the access process is slowed down. The re-grouping mechanism herein is advantageously applied for paging network resources onto a hard drive, since the SSM machinery is enhanced to account not only for the number of accesses but also for the time to access the resources (Hard Drive, USB, Network) to build a trusted sorting process.

SDP monitors internal RAM space defined by a range of addresses.
One type of SDP embodiment defines a range of addresses for all spaces used by SDP, such as Flash memory, DRAM, Hard Drive, and other components. These components have fragmentation problems of their own, or have pages in all of them (e.g., ten pages in internal RAM, forty pages on Flash memory, and forty pages on Hard Drive, with all these pages used for the same application). In an embodiment, SDP is used to execute on a distant memory location that has small bandwidth, such as a Network location. Cascading several SDP processes is added to such a type of SDP embodiment and to other SDP embodiments. Fragmentation matters, for example, when switching from one resource to another introduces access timing latency. Thus, re-grouping all pages of an application into one resource or contiguous space is performed to reduce or eliminate such access timing latency.

A hit counter performs a count for SDP purposes herein by adding to the number of accesses the time to access the resources (Hard Drive, USB, Network). The time to access resources is combined with the hit count, by ResourceHitCount = ResourceHitCount + 1. No running clocks are necessary, so frequency of accesses is used. The arithmetic suitably either increments a resource count or resets/clears it when the statistic is invalid, and then starts hit counts over again. Because SDP supports future code and extensions to the environment, the code behavior is unknown. Therefore, this hardware provides more visibility into what the code does (with respect to the frequency of what it does).

Some embodiments regard a Code page and an unmodified Data page as similar enough to be given equivalent prioritization, and thereby further reduce the relatively modest chip real estate of the prioritization circuitry. Since there are fewer priority levels, the chances of tied (equal) priorities are higher, and more random selection, or otherwise-subsequent page selection, to break a tie is involved for a given number of Secure RAM physical page slots governed by the SDP. Modified and Unmodified are regarded as the relevant page Types, and the TYPE[N] and WR[N] registers are either merged or fed to logic that produces a third, merged page type variable MODIF[N]. An example prioritization schedule next still uses a 4-tier Usage Level and reduces the number of priorities by three compared to the TABLE 6 encodings:

MODIF=0 page tagged as VERY LOW usage
MODIF=0 page tagged as LOW usage
MODIF=1 page tagged as VERY LOW usage
MODIF=1 page tagged as LOW usage
MODIF=0 page tagged as MEDIUM usage
MODIF=1 page tagged as MEDIUM usage

Another embodiment has a variable n representing the access timing of the memory. Depending on the value of n, the Statistics counter is decremented by one only each time n accesses occur. This variable n is configured in an SDP register (e.g., 2 bits per page; 00: very fast access timing; 01: fast; 10: medium; 11: slow). The Statistics counter is operated as follows:

00: One access, decrement by one.
01: Two accesses, decrement by one.
10: Four accesses, decrement by one.
11: Eight accesses, decrement by one.
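A sketch of this timing-weighted decay in C follows; decay_counter is an assumed name, and the per-page access_cnt accumulator stands in for whatever mechanism the hardware uses to count off 1, 2, 4, or 8 accesses per decrement.

    #include <stdint.h>

    /* speed_code is the 2-bit per-page setting: 00 fastest .. 11 slowest.
     * The statistics counter loses one count only every 1, 2, 4, or 8
     * accesses, so pages in slow memories age more gently. */
    void decay_counter(uint8_t *ctr, uint8_t *access_cnt, uint8_t speed_code)
    {
        uint8_t period = (uint8_t)(1u << (speed_code & 3u));  /* 1, 2, 4, 8 */
        if (++(*access_cnt) >= period) {
            *access_cnt = 0;
            if (*ctr > 0)
                (*ctr)--;
        }
    }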
Still another embodiment has two or more Priority Result registers 2870A and 2870B and Page Wiping Advice registers 2880A and 2880B, muxed for fast interleaved prioritization operations on Code and Data pages respectively.

When one of the applications in one type of SDP multi-threading embodiment is terminated, the slot assignments are suitably changed by deactivating the pages of the currently running application and re-activating the application that was frozen. For example, the Page Access Counters 2845 are frozen by the de-activation, and the statistics are fully operational as if the switch to another application never occurred.

Various embodiments are used with one or more microprocessors, each microprocessor having a pipeline selected from the group consisting of 1) reduced instruction set computing (RISC), 2) digital signal processing (DSP), 3) complex instruction set computing (CISC), 4) superscalar, 5) skewed pipelines, 6) in-order, 7) out-of-order, 8) very long instruction word (VLIW), 9) single instruction multiple data (SIMD), 10) multiple instruction multiple data (MIMD), and 11) multiple-core using any one or more of the foregoing.

DESIGN, VERIFICATION AND FABRICATION

Various embodiments of an integrated circuit improved as described herein are manufactured according to a suitable process of manufacturing 3200 as illustrated in the flow of FIG. 14. The process begins at a step 3205, and a step 3210 prepares RTL (register transfer language) and a netlist for a particular design of a page processing circuit including a memory for pages, a processor coupled to the memory, and a hardware page wiping advisor circuit coupled to the processor and operable to prioritize pages based both on page type and usage statistics. The Figures of drawing show some examples, and the detailed description describes those examples and various other alternatives.

In a step 3215, the design of the page processing circuit is verified in simulation electronically on the RTL and netlist. In this way, the contents and timing of the memory, of the processor, and of the hardware page wiping advisor circuit are verified. The operations are verified pertaining to producing the ACT, WR, TYPE and STAT entries, generating the priority codes for the priority sorting table, sorting the priority codes, generating the page wiping advice ADV, and resolving tied-priority pages. Then a verification evaluation step 3220 determines whether the verification results are currently satisfactory. If not, operations loop back to step 3210.

If verification evaluation 3220 is satisfactory, the verified design is fabricated in a wafer fab and packaged to produce a resulting integrated circuit at step 3225 according to the verified design. Then a step 3230 verifies the operations directly on first-silicon and production samples by using scan chain methodology on the page processing circuit. An evaluation decision step 3235 determines whether the chips are satisfactory, and if not satisfactory, the operations loop back to a point as early in the process as step 3210, as needed, to obtain satisfactory integrated circuits.

Given satisfactory integrated circuits in step 3235, a telecommunications unit based on teachings herein is manufactured.
This part of the process first prepares, in a step 3240, a particular design and printed wiring board (PWB) of the telecommunication unit having a telecommunications modem, a microprocessor coupled to the telecommunications modem, secure demand paging processing circuitry coupled to the microprocessor and including a secure internal memory for pages, a less-secure external memory larger than the secure internal memory, and a hardware secure page wiping advisor for prioritizing pages based both on page type and usage statistics and at least one wiping advisor parameter loaded in a step 3245, and a user interface coupled to the microprocessor.

The particular design of the page processing circuit is tested in a step 3250 by electronic simulation and is prototyped and tested in actual application. The wiping advisor parameters include the usage level tier definitions and any application-specific or all-application static allocation of Secure RAM to Code pages and Data pages. Also, for dynamic learning embodiments, initial application-specific allocations and parameters like DELTA or EPSILON are suitably adjusted.

The wiping advisor parameter(s) are evaluated for increased page wiping efficiency in a step 3255, as reflected in fast application execution, decreased Swap Rate in executing the same application code, lower power dissipation, and other pertinent metrics. If further increased efficiency is called for in step 3255, then adjustment of the parameter(s) is performed in a step 3260, and operations loop back to reload the parameter(s) at step 3245 and do further testing. When the testing is satisfactory at step 3255, operations proceed to step 3270.

In manufacturing step 3270, the adjusted wiping advisor parameter(s) are loaded into the Flash memory. The components are assembled on a printed wiring board or otherwise as the form factor of the design is arranged, to produce resulting telecommunications units according to the tested and adjusted design, whereupon operations are completed at END 3275.

It is emphasized here that while some embodiments may have an entire feature totally absent or totally present, other embodiments, such as those performing the blocks and steps of the Figures of drawing, have more or less complex arrangements that execute some process portions, selectively bypass others, and have some operations running concurrently or sequentially. Accordingly, words such as "enable," "disable," "operative," and "inoperative" are to be interpreted relative to the code and circuitry they describe. For instance, disabling (or making inoperative) a second function by bypassing a first function can establish the first function and modify the second function. Conversely, making a first function inoperative includes embodiments where a portion of the first function is bypassed or modified as well as embodiments where the second function is removed entirely. Bypassing or modifying code increases function in some embodiments and decreases function in other embodiments.

A few preferred embodiments have been described in detail hereinabove. It is to be understood that the scope of the invention comprehends embodiments different from those described yet within the inventive scope. Microprocessor and microcomputer are synonymous herein.
Processing circuitry comprehends digital, analog and mixed-signal (digital/analog) integrated circuits, ASIC circuits, PALs, PLAs, decoders, memories, non-software-based processors, and other circuitry, and digital computers including microprocessors and microcomputers of any architecture, or combinations thereof. Internal and external couplings and connections can be ohmic, capacitive, direct or indirect via intervening circuits, or otherwise as desirable. Implementation is contemplated in discrete components or fully integrated circuits in any materials family and combinations thereof. Various embodiments of the invention employ hardware, software or firmware. Process diagrams herein are representative of flow diagrams for operations of any embodiments, whether of hardware, software, or firmware, and of processes of manufacture thereof.

While this invention has been described with reference to illustrative embodiments, this description is not to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, may be made. The terms "including", "includes", "having", "has", "with", or variants thereof are used in the detailed description and/or the claims to denote non-exhaustive inclusion in a manner similar to the term "comprising". |
A system and technique for detecting a device that requires power are implemented with a power detection station. The power detection system includes a detector having an output and a return which are coupled together by the device when the device requires power. The detector includes a word generator for generating test pulses for transmission to the device via the detector output, and a comparator for comparing the detector output with the detector return. The power detection station has a wide variety of applications, including, by way of example, a switch or hub. |
What is claimed is: 1. A power requirement detection system, comprising:a detector having an output and a return; and a device to selectively couple the detector output to the detector return when the device requires power, said detector entering an auto-negotiation mode for establishing a data communication link with the device, said auto-negotiation mode utilizing an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 2. The power requirement detection system of claim 1 wherein the detector output comprises a signal having a first pulse with a first pulse width and a second pulse with a second pulse width different from the first pulse width, and comprising a filter for passing the first pulse and attenuating the second pulse.3. The power requirement detection system of claim 2 wherein the second pulse width is programmable.4. The power requirement detection system of claim 2 wherein the detector determines that the device requires power when the detector return comprises the first pulse but not the second pulse.5. The power requirement detection system of claim 4 further comprising a power source selectively coupled to the device when the detector determines that the device requires power.6. The power requirement detection system of claim 1 wherein the detector output comprises a signal, and the detector determines that the device requires power when the detector return comprises the signal.7. The power requirement detection system of claim 6 wherein the detector determines that the device requires power only if the detector return comprises the signal within a predetermined time window.8. The power requirement detection system of claim 7 further comprising a power source selectively coupled to the device when the detector determines that the device requires power.9. The power requirement detection system of claim 1 wherein the detector output includes a pseudo random word comprising a plurality of pulses.10. The power requirement detection system of claim 1 wherein the detector output comprises an identifier comprising a plurality of pulses.11. The power requirement detection system of claim 1 further comprising a power source selectively coupled to the device.12. The power requirement detection system of claim 1 wherein the device comprises an IP telephone.13. A method for detecting a device requiring power, comprising:transmitting a pulse to the device; receiving the pulse from the device; detecting whether the device requires power in response to the received pulse; and entering an auto-negotiation mode for establishing a data communication link with the device, wherein the auto-negotiation mode utilizes an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 14. The method of claim 13 wherein the power requirement detection comprises applying power to the device.15. The method of claim 13 wherein the pulse transmission comprises transmitting a pseudo random word comprising a plurality of pulses.16. The method of claim 13 wherein the pulse transmission comprises transmitting an identifier comprising a plurality of pulses.17. The method of claim 13 wherein the transmitted pulse has a pulse width, the method further comprising transmitting another pulse having a different pulse width to the device.18. The method of claim 17 further comprising programming the second pulse width.19. 
The method of claim 17 wherein the power requirement detection comprises applying power to the device when the received pulse comprises the transmitted pulse but not said another transmitted pulse.20. The method of claim 13 wherein the power requirement detection comprises applying power to the device when the received pulse comprises the transmitted pulse.21. The method of claim 20 wherein the power requirement detection comprises applying power to the device when the received pulse comprises the transmitted pulse within a predetermined time after the transmitted pulse is transmitted to the device.22. The method of claim 13 wherein the device comprises an IP telephone.23. A method of detecting a device requiring power, comprising:transmitting a test signal over a two-way data transmission line to a DTE; receiving a response signal from the DTE via the two-way data transmission line; operatively coupling a power source to the two-way data transmission line after receipt of the response signal, for providing operating power to the DTE; and entering an auto-negotiation mode for establishing a data communication link with the DTE over the two-way data transmission line, wherein the auto-negotiation mode utilizes an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 24. The method of claim 23 wherein the response signal comprises the test signal.25. The method of claim 23 wherein the DTE comprises an IP phone.26. The method of claim 23 further comprising entering a DTE detection mode prior to the transmitting.27. The method of claim 23 wherein the step of operatively coupling the power source to the two-way data transmission line occurs only if the response signal comprises the test signal.28. The method of claim 23 wherein the test signal comprises at least one unique fast link pulse word.29. The method of claim 23 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.30. A method of detecting a device requiring power, comprising:receiving a signal from a DTE via a two-way data transmission line; determining, using the signal received, if the DTE requires power; operatively coupling a power source to the two-way transmission line if it is determined that the DTE requires power; and entering an auto-negotiation mode for establishing a data communication link with the DTE over the two-way data transmission line, wherein the auto-negotiation mode utilizes an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 31. The method of claim 30 further comprising applying a test signal to the two-way data transmission line, and wherein a determination that the DTE requires power occurs if the signal received is an expected signal based on the application of the test signal to the two-way data transmission line.32. The method of claim 31 wherein the expected signal comprises the test signal.33. The method of claim 32 wherein the test signal comprises at least one unique fast link pulse word.34. The method of claim 30 wherein the DTE comprises an IP phone.35. The method of claim 30 further comprising entering a DTE detection mode prior to the receiving.36. The method of claim 30 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.37. 
A power requirement detection system comprising:a first device for determining, using a test signal, that a DTE requiring power is coupled to a two-way data transmission line; and a second device for causing the provision of operating power to the DTE requiring power via the two-way data transmission line, at least one of the first device and the second device entering an auto-negotiation mode for establishing a data communication link with the DTE, said auto-negotiation mode utilizing an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 38. The power requirement detection system of claim 37 wherein the first device comprises a physical layer transceiver, and the second device comprises a controller.39. The power requirement detection system of claim 37 wherein the first device is responsive to the second device for determining that a DTE requiring power is coupled to the two-way data transmission line, and wherein the second device is responsive to the first device for causing the provision of operating power to the DTE requiring power.40. The power requirement detection system of claim 37 wherein the DTE comprises an IP phone.41. The power requirement detection system of claim 37 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.42. A power requirement detection system comprising:a first device that determines, using a test signal, whether a DTE coupled to a two-way data transmission line requires power; and a second device that causes a power source to be coupled to the two-way data transmission line for providing operating power to the DTE if it is determined that the DTE requires power, at least one of the first device and the second device entering an auto-negotiation mode for establishing a data communication link with the DTE, said auto-negotiation mode utilizing an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 43. The power requirement detection system of claim 42 wherein the first device comprises a physical layer transceiver, and the second device comprises a controller.44. The power requirement detection system of claim 42 wherein the first device is responsive to the second device for making the determination, and wherein the second device is responsive to the first device for causing the power source to be coupled to the two-way data transmission line.45. The power requirement detection system of claim 42 wherein the DTE comprises an IP phone.46. The power requirement detection system of claim 42 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.47. A method of detecting a device requiring power, comprising:determining if a DTE coupled to a two-way data transmission line requires power; providing power to the DTE via the two-way data transmission line if it is determined that the DTE requires power; and entering an auto-negotiation mode for establishing a data communication link with the DTE over the two-way data transmission line, wherein the auto-negotiation mode utilizes an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 48. The method of claim 47 wherein the DTE comprises an IP phone.49. The method of claim 47 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.50. The method of claim 47 further comprising detecting that the DTE is coupled to the two-way data transmission line.51. 
A method of detecting a device requiring power, comprising:receiving a signal from a two-way data transmission line; determining whether the signal received corresponds to a signal that was transmitted over the two-way data transmission line; coupling a power source to the two-way data transmission line if the signal received corresponds to the signal that was transmitted; and entering an auto-negotiation mode for establishing a data communication link over the two-way data transmission line, wherein the auto-negotiation mode utilizes an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 52. The method of claim 50 wherein the signal received corresponds to the signal that was transmitted if the signal received comprises the signal that was transmitted.53. The method of claim 51 wherein the signal received comprises at least one unique fast link pulse word.54. The method of claim 51 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.55. A method of detecting a device requiring power, comprising:transmitting a signal over a two-way data transmission line; receiving the signal from the two-way data transmission line; causing power to be supplied to the two-way data transmission line after receiving the signal; and entering an auto-negotiation mode for establishing a data communication link over the two-way data transmission line, wherein the auto-negotiation mode utilizes an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 56. The method of claim 55 wherein the signal comprises a test signal.57. The method of claim 55 wherein the signal comprises at least one unique fast link pulse word.58. The method of claim 55 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.59. A power requirement detection system comprising:a first device that transmits a signal over a two-way data transmission line; and a second device that causes operating power to be supplied over the two-way data transmission line if the first device receives the signal from the two-way data transmission line, at least one of the first device and the second device entering an auto-negotiation mode for establishing a data communication link with the DTE, said auto-negotiation mode utilizing an auto-negotiation scheme embedded within the IEEE 802.3u clause-28 rules. 60. The power requirement detection system of claim 59 wherein the first device comprises a physical layer transceiver, and the second device comprises a controller.61. The power requirement detection system of claim 59 wherein the two-way data transmission line comprises one of a category 3, 4 or 5 unshielded twisted pair cable.62. The power requirement detection system of claim 59 wherein the signal comprises a test signal.63. The power requirement detection system of claim 59 wherein the signal comprises at least one unique fast link pulse word. |
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 60/148,363, filed on Aug. 11, 1999, the contents of which are expressly incorporated herein by reference as though set forth in full.

FIELD OF THE INVENTION

The present invention relates generally to telecommunications systems, and more particularly, to systems and techniques for detecting a device that requires power.

BACKGROUND OF THE INVENTION

Data terminal equipment (DTE) devices are well known. Examples of DTE devices include any kind of computer, such as notebooks, servers, and laptops; smart VCRs, refrigerators, or any household equipment that could become a smart device; IP telephones, fax machines, modems, televisions, stereos, hand-held devices, or any other conventional equipment requiring power. Heretofore, DTE devices have generally required external power from an AC power source. This methodology suffers from a number of drawbacks, including inoperability during power shortages or failure of the external power source. Accordingly, it would be desirable to implement a system where the DTE power is drawn directly from the transmission line. This approach, however, would require a technique for detecting whether a DTE is connected to the transmission line and whether the DTE requires power.

SUMMARY OF THE INVENTION

In one aspect of the present invention, a power detection system includes a detector having an output and a return, and a device to selectively couple the detector output to the detector return when the device requires power.

In another aspect of the present invention, a detector having an output and a return includes a word generator coupled to the detector output, and a comparator to compare the detector output with the detector return.

In yet another aspect of the present invention, a method for detecting a device requiring power includes transmitting a pulse to the device, receiving the pulse from the device, and detecting whether the device requires power in response to the received pulse.

In yet still another aspect of the present invention, a transmission system includes a transmission line interface having at least one port, a two-way transmission line coupled to one of the ports, and a device coupled to the differential transmission line, the device selectively coupling the two-way transmission line together when the device requires power.

It is understood that other embodiments of the present invention will become readily apparent to those skilled in the art from the following detailed description, wherein embodiments of the invention are shown and described only by way of illustration of the best modes contemplated for carrying out the invention. As will be realized, the invention is capable of other and different embodiments, and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.

DESCRIPTION OF THE DRAWINGS

These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:

FIG. 1 shows an exemplary embodiment of the present invention with a detecting station connected to a DTE via a two-way transmission line.
FIG. 2 shows an exemplary embodiment of this application with a Fast Ethernet switch having eight detecting stations.

FIG. 3 shows a detecting station connected to a DTE, the DTE being modified to include a low-pass filter.

FIG. 5 shows the logic that generates the test pulses and compares the test pulses with the received pulses.

FIG. 4 shows a detecting station subsection and a DTE requiring power.

FIG. 6 shows an exemplary embodiment of the low-pass filter.

FIG. 7 shows the sequence for DPM detection combined with Auto-Negotiation in a basic embodiment of the invention.

FIG. 8 is a flowchart that shows the sequence for DPM detection combined with Auto-Negotiation in a preferred embodiment of the invention.

DETAILED DESCRIPTION

In accordance with a preferred embodiment of the present invention, a detector is utilized to detect the presence of a device on a transmission line and whether the device requires power. The device can be data terminal equipment (DTE) or any other device that may require power. Exemplary DTE equipment includes any kind of computer, such as notebooks, servers, and laptops; smart VCRs, refrigerators, or any household equipment that could become a smart device; IP telephones, fax machines, modems, televisions, stereos, hand-held devices, or any other conventional equipment requiring power. If the presence of a DTE requiring power is detected, then the detector can supply power to the DTE.

The described embodiment has broad applications. For example, a number of areas can benefit from power delivery over a transmission line, including IP Telephony, Web Cameras, Wireless Access Points, Industrial Automation, Home Automation, Security Access Control and Monitoring Systems, Point of Sale Terminals, Lighting Control, Gaming and Entertainment Equipment, Building Management, and any other area where power is required.

An exemplary embodiment of the present invention is shown in FIG. 1 with a detecting station 10 connected to a DTE 20 via a two-way transmission line (detector output 30 and detector return 32). The detecting station includes a detector 12, a controller 14, and a power source 16. The detector 12 provides a direct interface to the DTE. The controller 14 initiates control and the detection process. In the preferred embodiment of the invention, the detector is a physical layer transceiver (PHY) with detecting capability. The controller 14 causes the detector 12 to detect whether the DTE 20 is connected to the transmission line and whether the DTE 20 requires power. If the detector 12 determines that a DTE 20 requiring power is connected to the transmission line, it signals the controller 14. In response, the controller 14 activates the power source 16, thereby providing power to the DTE 20.

The DTE includes a relay 22 connected across the two-way transmission line 30, 32. The switches 22a, 22b are used to selectively connect the detector output 30 to the detector return 32 in the power requirement detection mode, and to connect the two-way transmission line 30, 32 to the DTE circuitry 28 once power is applied to the DTE 20. Those skilled in the art will appreciate that other devices, such as electronic switches and other conventional devices, can be used to selectively connect the detector output 30 to the detector return 32.

In operation, the detector 12 determines whether the connected DTE 20 requires power by sending test pulses to the DTE 20.
In the default mode (power requirement detection mode), the relay 22 is de-energized, causing the detector output 30 to be connected to the detector return 32 through the relay switches 22a, 22b. Thus, any test pulses sent from the detector 10 to the DTE 20 are looped back to the detector 12. The detector 12 determines that the DTE requires power if the test pulses are looped back from the DTE 20 to the detector 10. When the detector 12 determines that the DTE 20 requires power, it signals the controller 14. The controller 14 activates the power source 16, thereby delivering power over the two-way transmission line 30, 32. Once power is applied to the two-way transmission line 30, 32, the relay 22 is energized, causing the relay switches 22a, 22b to connect the two-way transmission line 30, 32 to the DTE circuitry 28.

The described embodiment of the detector has a wide range of application. For example, the detector could be integrated into a transmission line interface, such as a switch or hub, which links various DTEs onto a local area network (LAN). This application would provide a technique for detecting which DTEs, if any, connected to the LAN require power, and providing power over the LAN to those DTEs that require it. FIG. 2 shows an exemplary embodiment of this with a Fast Ethernet switch 51 having eight detecting stations 40, 42, 44, 46, 48, 50, 52, 54. Each detecting station includes a full-duplex 10/100BASE-TX/FX transceiver (not shown). Each transceiver performs all of the Physical layer interface functions for 10BASE-T Ethernet on CAT 3, 4 or 5 unshielded twisted pair (UTP) cable and 100BASE-TX Fast Ethernet on CAT 5 UTP cable. 100BASE-FX can be supported at the output of each detecting station through the use of external fiber-optic transceivers.

The detecting stations 40, 42, 44, 46, 48, 50, 52, 54 are connected to a data bus 58. A CPU 60 controls the communication between detecting stations by controlling which detecting stations have access to the data bus 58. Each detecting station has a detector that can be connected to a DTE. In the described embodiment, the detecting stations 40, 42 are not connected to any device. The detecting stations 44, 48 are connected to IP telephones 62, 64. The detecting stations 46, 50, 52 are connected to computers 66, 68, 70. The detecting station 54 is connected to a fax machine 72.

In the default mode, each detector of each detecting station sends test pulses to its respective device. Each detector would then wait to see if the test pulses from its respective DTE device are looped back. In the described embodiment, if the IP telephones 62, 64 are the only devices requiring power, then the test pulses will only be looped back to the detecting stations 44, 48. The detecting stations 44, 48 will then deliver power to their respective IP telephones over the transmission line. The computers 66, 68, 70 and the fax machine 72 do not require power, and therefore will not loop back the test pulses to their respective detectors. As a result, the detecting stations 46, 50, 52, 54 will not deliver power over the transmission line.

Although the detector is described in the context of a Fast Ethernet switch, those skilled in the art will appreciate that the detector is likewise suitable for various other applications. Accordingly, the described exemplary application of the detector is by way of example only and not by way of limitation.
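The per-port detection cycle just described reduces to a simple send-compare-power sequence. A minimal C sketch follows; every function name in it (send_test_pulses, loopback_matches, and so on) is a hypothetical stand-in for the detector 12 and controller 14 behavior, not an actual driver API.

    /* Hedged sketch of one detection cycle, per the sequence above:
     * send test pulses; if the DTE's de-energized relay loops them
     * back, have the controller enable the power source. All names
     * are hypothetical illustrations. */
    #include <stdint.h>
    #include <stdbool.h>

    extern uint16_t generate_test_word(void);     /* word generator          */
    extern void     send_test_pulses(uint16_t w); /* onto detector output 30 */
    extern bool     loopback_matches(uint16_t w); /* compare on return 32    */
    extern void     enable_power_source(void);    /* controller + source 16  */

    void detection_cycle(void)
    {
        uint16_t w = generate_test_word();
        send_test_pulses(w);
        if (loopback_matches(w)) {   /* DTE requires power: the relay   */
            enable_power_source();   /* then energizes and connects the */
        }                            /* line to the DTE circuitry       */
    }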
In the context of a Fast Ethernet switch, it is desirable to configure the detectors to prevent failures of DTE devices in the event that the system is wired incorrectly. For example, in the Fast Ethernet switch application shown in FIG. 2, one skilled in the art could readily recognize that the computer 68, which does not require power, could be inadvertently wired directly to the IP telephone 64. If the IP telephone 64 required power, a switch (see FIG. 1) would connect the two-way transmission line together in the default mode. As a result, the computer 68 would attempt to negotiate data rates with the IP telephone 64 on power-up. The data rate negotiation in the described exemplary application is governed by the IEEE 802.3u Clause-28 rules, the contents of which are expressly incorporated herein by reference as though set forth in full. This standard dictates an Auto-Negotiation methodology wherein Fast Link Pulses (FLP) having a 100 ns pulse width are transmitted between devices. Accordingly, the FLPs transmitted by the computer 68 would be looped back to the computer 68 through the relay contacts in the IP telephone 64 (see FIG. 1). The computer 68 would interpret these looped-back FLPs as data from a device attempting to negotiate a data rate with it. The computer 68 would thus be unable to successfully negotiate a data rate and would enter into a continuous loop.

To avoid this potential problem, an exemplary embodiment of the present invention utilizes a filter in the front end of the DTE. Turning to FIG. 3, a detecting station 10 is shown connected to a DTE 20'. The detecting station 10 is identical to that described with reference to FIG. 1. However, the DTE 20' has been modified to include a low-pass filter 34 connected between the detector output 30 and the detector return 32 through the relay switches 22a, 22b when the relay 22 is de-energized. The cutoff frequency of the low-pass filter 34 is set to filter out the 100 ns FLPs. Thus, in this embodiment, the detector uses test pulses having pulse widths greater than 100 ns, which will pass through the low-pass filter. With this approach, if the computer 68 (see FIG. 2) were inadvertently connected to the IP telephone 64, the 100 ns FLPs transmitted from the computer 68 to the IP telephone 64 would be filtered out by the low-pass filter 34 (see FIG. 3), thereby preventing the computer 68 from entering into a continuous loop.

If the system were wired correctly, however, test pulses wide enough to pass the low-pass filter 34 would be looped back through the DTE 20' to the detecting station 10, indicating a requirement for power.

In operation, the detector 10 determines whether the connected DTE 20' requires power by sending test pulses to the DTE 20'. Typically, a 150 ns wide pulse can be used, although those skilled in the art will readily appreciate that the filter can be designed to pass test pulses of any width. Preferably, the pulse width of the test pulses is programmable. The skilled artisan will also recognize that either a single test pulse or a series of test pulses can be used to detect DTEs requiring power. In the context of a Fast Ethernet switch, economy dictates that a 16-bit word conforming to the IEEE 802.3 standards is used.
This standard is already supported in the detector 12 and controller 14, and therefore lends itself to easy integration of the test pulses into the detector 10 without any significant increase in complexity.

In the default mode (power requirement detection mode), the relay 22 is de-energized, causing the detector output 30 to be connected to the detector return 32 through the relay switches 22a, 22b. Thus, any test pulses sent from the detector 10 to the DTE 20' are looped back to the detector 10 through the filter 34. The detector 12 determines that the DTE requires power if the test pulses are looped back from the DTE 20' to the detector 10. When the detector 12 determines that the DTE 20' requires power, it signals the controller 14. The controller 14 activates the power source 16, thereby delivering power over the two-way transmission line 30, 32. Once power is applied to the two-way transmission line 30, 32, the relay 22 is energized, causing the relay switches 22a, 22b to connect the two-way transmission line 30, 32 to the DTE circuitry 28.

The 16-bit word generated by the test pulses can be a pseudo-random word in the described exemplary embodiment. This approach significantly reduces the risk that two detectors in the Fast Ethernet switch inadvertently wired together will attempt to power one another. If this inadvertent miswiring were to occur, the chance that the detectors would generate the same 16-bit word, such that it would appear at each detector as if its own test pulses were being looped back, is (1/2)^16. Alternatively, the 16-bit word could be an identifier such as a controller address. In other words, the address would be embedded into the 16-bit word. As a result, if two detectors were inadvertently wired together, the exchange of test pulses between them would not be mistaken for a looped-back condition because the controller address of each detecting station is different.

To further reduce the risk of one detector mistaking another detector for a DTE, the detector could generate a narrow window in time when it expects to receive test pulses back after transmission. Thus, unless the two detectors are sending test pulses at or near the same time, a looped-back condition would not be detected. For example, using the IEEE 802.3 standard, a 16-bit word is transmitted every 8 ms minimum. If the window is set for the worst-case round-trip delay of each test pulse, say 4 us, then the probability that the other detector would transmit its test pulses within the window is 1/2000.

Further reliability can be achieved by sending two groups of test pulses. The first group of test pulses will have sufficiently wide pulse widths such that they pass through the filter of the DTE. The second group of test pulses will be FLPs of 100 ns width as specified in the IEEE 802.3u Clause-28 rules. As a result, only the first group of test pulses will be routed back to the detector. The detector detects the first group of pulses and signals the controller. In response, the controller enables the power source, which delivers power to the two-way transmission line.

This approach is useful for detecting a short in the two-way transmission line. For example, if the detector output were shorted to the detector return, both the first and the second group of test pulses would be detected by the detector. This information would be signaled to the controller. The controller would process the results, concluding that a short in the two-way transmission line has occurred since both the first and second groups of test pulses were received. In response, the controller would not enable the power source.
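The description does not specify how the word generator produces its pseudo-random 16-bit word; one conventional realization is a maximal-length linear feedback shift register (LFSR). The following hedged C sketch uses a standard 16-bit Galois LFSR purely for illustration, and its comment echoes the (1/2)^16 collision reasoning above.

    /* Hedged sketch: a conventional 16-bit maximal-length Galois LFSR
     * as one possible word generator. Two independent detectors emit
     * identical words with probability (1/2)^16 = 1/65536, the
     * miswiring-collision figure discussed above. */
    #include <stdint.h>

    uint16_t lfsr_next(uint16_t state)
    {
        uint16_t lsb = state & 1u;
        state >>= 1;
        if (lsb)
            state ^= 0xB400u;   /* taps for a maximal 16-bit sequence */
        return state;
    }

Seeding each detecting station differently, for example from its controller address as the identifier variant above suggests, would make a chance match between miswired ports even less likely.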
FIG. 4 shows a detecting station 10 subsection and a DTE requiring power 20'. The detecting station includes logic 100, a transmitter 102, a receiver 104, a detector transmit transformer 106, a detector receive transformer 108, and a power source 110. The DTE includes DTE circuitry 120, a receiver 126, a transmitter 124, a DTE receive transformer 116, a DTE transmit transformer 118, a relay 112, and a filter 34.

The test pulses are generated by the logic 100 and coupled to the transmitter 102. The output of the transmitter is coupled to the primary winding of the transmit transformer, causing the test pulses to be induced into the secondary winding. The secondary winding of the transmit transformer is coupled to a DTE power source. The power source is isolated from the transmitter and receiver to protect their circuitry. The test pulses from the secondary winding of the transmitter are transmitted to the DTE. The wires between the detecting station and the DTE requiring power are shown in FIG. 5 between the dashed lines 122. The test pulses do not energize the relay 112 because the test pulses are AC. The test pulses transmitted to the secondary windings of the DTE transformer are induced onto the primary side of the DTE receive transformer 116.

In the absence of power in the DTE, the test pulses at transformer 116 and relay 112 are directed through the low-pass filter 34. The primary winding of the DTE receive transformer 116 is coupled to the primary winding of the DTE transmit transformer 118 through the low-pass filter 34. The test pulses from the DTE receive transformer 116 are directed through the filter 34 to the primary winding of the DTE transmit transformer 118. The test pulses from the primary winding of the DTE transmit transformer are induced into the secondary winding of the DTE transmit transformer 118. This condition, in which power is absent on the DTE and the receive signal passes through the filter to the transmit side of the DTE, is referred to as the loopback condition. The secondary winding of the DTE transmit transformer then sends the induced test pulses onto the detector return line. The test pulses on the detector return are coupled to the secondary winding of the detector receive transformer 108, thereby inducing the test pulses into the primary winding, which feeds the receiver 104.

The logic 100 compares the test pulses sent with the test pulses received. If the test pulses match, then a DTE requiring power has been detected. Once the DTE requiring power is detected, the detector supplies power via the transmission line to the DTE requiring power. The power is directed from the power supply 110 of the detector to the detector output onto the transmission wires. The DTE power sink absorbs the power, and the DC power activates the relay 112, thereby closing the switches from the transformers 116, 118 and connecting the detector with the DTE. The power connection to the DTE requiring power 20' comes from the detector output of the transformer, as opposed to the detector side of the DTE requiring power.

The power source may have a current limitation in order to prevent hazards in case of a cable short while the detector is powered. The transformers 106, 108, 116, 118 provide isolation between the detector 10 and the DTE requiring power 20'.
FIG. 5 shows the logic 100 that generates the test pulses and compares the test pulses with the received pulses. A word generator 84 is coupled to a register 82. The word generator 84 generates the test pulses, which in the described exemplary embodiment form a 16-bit word. In the preferred embodiment, the word generator 84 generates a pseudo-random code word. Alternatively, the word generator 84 is designed to generate a unique identifier, which can be a controller identifier. The uniqueness of the word generator output, also referred to as the unique code word, increases the probability of correctly detecting a DTE requiring power through the loopback connection. The controller initiates the detection mode by generating an Initiate Detection trigger 80, which causes the register 82 to latch the output of the word generator 84. The register 82 is coupled to a pulse shaping device such as a digital-to-analog converter (DAC) 86. The DAC is used to shape the pulse. In the preferred embodiment, the DAC generates a link pulse shape in accordance with IEEE 802.3u and IEEE 802.3. The digital-to-analog converter (DAC) 86 converts the test pulses into analog signals for output to the DTE. The controller indicates the length of the test pulses by writing to register 90. Register 90, being coupled to the DAC, determines the length of the test pulses. In the preferred embodiment, in accordance with IEEE 802.3u and IEEE 802.3, the typical test pulse is 100 ns wide. By programming register 90, the test pulse width can be widened to, for example, 20 us or more.

A signal detecting device such as an analog-to-digital converter (ADC) converts the DTE output analog signals to digital signals. The ADC is coupled to a register 93. The register 93 is coupled to a comparator 94 and latches the ADC output for use by the comparator 94.

The window time period is programmable. The controller programs the time window by writing to the programmable register 91. Register 91, being coupled to timer 92, determines the length of the time window. The timer 92 enables the comparator 94 to compare the sent test pulses with the received test pulses during the window time period. If the sent test pulses are the same as the received pulses and the pulses are received within the window time, then the comparator indicates a match 95. If the received pulses are not the same as the sent pulses or are not received within the window time, then the comparator indicates a mismatch 97. The purpose of the window time period is to improve the probability of correctly matching sent test pulses with received test pulses and to reduce the probability of mis-detecting another detector sending the same unique code word.

The logic 100 is controlled via the flow/state diagrams in FIGS. 7 and 8 for the basic and preferred embodiments, respectively. In the preferred embodiment, the flow/state diagram is embedded within the IEEE standard 802.3u Clause-28 Auto-Negotiation definition and inter-operates with all devices designed to that standard.
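The register-and-timer arrangement above amounts to a windowed compare. The hedged C sketch below models it in software; the callbacks and names stand in for registers 90-94 and timer 92 and are illustrative assumptions, not the device's actual programming interface.

    /* Hedged sketch of the windowed compare: declare a match (95) only
     * if the looped-back word equals the transmitted word AND arrives
     * within the programmed window; otherwise a mismatch (97). */
    #include <stdbool.h>
    #include <stdint.h>

    extern uint32_t elapsed_us(void);          /* stands in for timer 92      */
    extern bool     word_ready(void);          /* ADC output latched (reg 93) */
    extern uint16_t read_received_word(void);

    bool windowed_match(uint16_t sent_word, uint32_t window_us)
    {
        uint32_t start = elapsed_us();
        while (elapsed_us() - start < window_us) {  /* window from reg 91  */
            if (word_ready())
                return read_received_word() == sent_word; /* comparator 94 */
        }
        return false;  /* nothing received within the window: mismatch */
    }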
In addition to configuring the detector to transmit two groups of test pulses, it is also desirable in certain embodiments of the present invention to implement the power source with current limiting capability in the event of a short circuit in the two-way transmission line.

An exemplary embodiment of the low-pass filter is shown in FIG. 6. The low-pass filter is a 3-pole filter with a cutoff frequency of 880 kHz. In the described exemplary embodiment, the low-pass filter comprises a 7.0 uH inductor 128, two 2 nF capacitors connected in parallel 130, 132, and a zero-ohm resistor 134. The zero-ohm resistor is a placeholder to show that the inductor, capacitors, and resistor can have different values, such that the cutoff frequency is 880 kHz. Alternatively, the low-pass filter can have any cutoff frequency that passes low frequencies.

The detector provides support for identifying data terminal equipment capable of accepting power via the media dependent interface (MDI). Such a DTE is typically connected to an Ethernet switch capable of detecting its presence and able to establish signaling with it. The process of identifying a DTE that is power-via-MDI capable is termed DPM. The detector provides support for an internet-protocol based telephone, known as an IP PHONE. The IP PHONE is one type of DTE.

The detector is capable of normal Auto-Negotiation, which is its default state, or a modified Auto-Negotiation when its DPM detection mode is enabled. The Auto-Negotiation scheme is embedded within the IEEE 802.3u Clause-28 rules. Therefore, the detector can be connected to either an IP PHONE or a non-IP PHONE without detriment to the detector operation.

When the detector starts Auto-Negotiation and DPM detection is enabled, it sends a unique Fast Link Pulse (FLP) word that is different from a formal FLP word. If the Link partner is DPM capable, it returns this unique FLP word. Otherwise, the detector may receive the Link partner's word instead of the unique FLP word sent. The detector updates a register containing relevant status bits that the controller (Control) can read. The detector continues to send the unique FLP word if no response is received from the Link partner. The controller, at any time, can disable DPM detection and restart Auto-Negotiation to establish a normal link with the Link partner.

Upon power-up the detector defaults to normal mode, the non-DPM detection mode, as per the IEEE 802.3u standard. The detector includes a shadow register, DPM, containing the required 'enable' and 'status' bits for DPM support.

If the DPM detection mode is enabled, through modifications to the Auto-Negotiation algorithm, the detector sends a unique Fast Link Pulse (FLP) word that is different from a normal FLP word. If the Link partner is a DPM, this unique FLP word externally loops back to the device. Otherwise, the device may receive the Link partner's word instead of its own unique FLP word. The detector is capable of robustly determining whether its partner is DTE type or not. Upon determination, the detector updates a register containing relevant status bits that the controller can read. The detector continues to send the unique FLP word if no response is received from a partner. The controller, at any time, can disable the DPM detection mode and restart the Auto-Negotiation to establish a normal link with a Link partner.

FIG. 7 shows the sequence for DPM detection combined with Auto-Negotiation in a basic embodiment of the invention. Tables 1 and 2 show the DPM register bits and their description. DPM detection can be reset or restarted along with auto-negotiation or link loss 160. The controller can enable DPM detection by setting the DPMDETEN bit to a "1" and restart Auto-Negotiation by setting the ANRSTRT bit to a "1" 162. If these bits are not set, then normal auto-negotiation proceeds 164.
When the DPM detection mode is enabled, the device loads an internally generated unique (random) word into the Auto-Negotiation Advertisement register, also called an FLP register 166, and begins to transmit this FLP word 168. In the basic embodiment, while this word is transmitted, the link pulse width can be increased from a normal 100 ns to 150 ns if the LPXTND bit is set to a "1". In the preferred embodiment, while this word is transmitted, the link pulse width can be increased from 150 ns to 950 ns, in 100 ns increments per the FLPWIDTH register, if the LPXTND bit is set to a "1". If the LPXTND bit is a "0", then a default link pulse width of 100 ns is used. The wider link pulse enhances the cable reach for the DTE if the external loopback is over CAT 3 cabling.

In the basic embodiment, if the unique FLP word is not received from the Link partner, then the detector continues to send the DPM FLP burst 170. If the unique FLP word is received from the Link partner 172, then the detector checks if the sent FLP burst matches the received FLP burst 174. If they match, then the detector sets its DPMSTAT bit to a "1" 176. The received unique FLP word indicates a DPM detection. If it receives any other FLP word, the detector sets its MISMTCH bit to a "1" 178, indicating a non-DPM detection. After it sets either the DPMSTAT or MISMTCH bit, the detector stops auto-negotiation and waits in the TX-Disable state of the Auto-Negotiation arbitrator state machine. The controller polls the mutually exclusive DPMSTAT and MISMTCH bits to determine if a partner is detected and if the partner is DPM capable. If the partner is DPM capable, the power to the DTE is supplied through the UTP cable. After the partner has been identified through the DPMSTAT or MISMTCH bit, to establish a link with the partner, the DPMDETEN bit should be disabled and the Auto-Negotiation process restarted.

In the preferred embodiment, DPM detection can be reset or restarted along with auto-negotiation or link loss 180. The controller can enable DPM detection by setting the DPMDETEN bit to a "1" and restart Auto-Negotiation by setting the ANRSTRT bit to a "1" 182. If these bits are not set, then normal auto-negotiation proceeds 184, the MISMTCH bit is set to "1", and the DPMSTAT bit is set to "0" 186. When the DPM detection mode is enabled, the device loads an internally generated unique (random) word into the Auto-Negotiation Advertisement register, also called an FLP register 188, and begins to transmit this DPM FLP word 190. In the preferred embodiment of the invention, the detector continues to send out an internally generated unique DPM FLP word, the FLP burst, during the DPMDETEN mode, until the detector detects energy from the Link partner 192.

In the preferred embodiment, when the detector detects energy from the Link partner, the detector checks if an FLP word has been received 194. If no FLP is received, then the detector starts and completes parallel detection 196, sets the MISMTCH bit to a "1", sets DPMSTAT to "0" 198, and enters the link phase as per the parallel detection. The detector then checks whether the received FLP matches the DPM FLP 200. If the received FLP word does not match the DPM FLP burst, then the detector sets the MISMTCH bit to a "1", sets DPMSTAT to "0" 198, completes Auto-Negotiation, and enters the link phase. If the received FLP word matches the DPM FLP burst, then the detector sets the DPMSTAT bit to a "1" 202. The detector checks if the DPMCONT bit is set to "1" 204.
If DPMCONT bit is a "0" then the sytem stops Auto-Negotiation 206 and waits for the controller before taking further action. If DPMCONT bit is a "1" then the detector sends a DPM FLP burst 208 and monitors the state of receive FLP timer and energy from the Link partner.The detector checks whether the Max FLP Receive timer expired 210. If the Receive FLP timer has expired, then the detector sets the DPMSTAT bit to a "0" 212 and starts over the DPM detection.If the Receive FLP time has not expired, then the detector checks if energy is detected 214. If energy is not detected, then the detector checks if the FLP receive time expired. If energy is detected, then the detector checks whether the FLP has been received 216. If energy is detected from the Link partner but no FLP is received then the sytem starts and completes parallel detection, sets MISMTCH bit to a "1", sets DPMSTAT to "0", and enters link phase as per the parallel detection 196. If an FLP is received, then the detector checks whether the received FLP matches the DPM FLP burst 118. If energy detected from the Link partner is an FLP word and if it matches the DPM FLP burst then the detector returns to sending a DPM FLP burst 108. If energy detected from the Link partner is an FLP word but it does not match the DPM FLP burst then the sytem sets MISMTCH bit to a "1", sets DMPSTAT to "0" 86 and completes Auto-Negotiation and enters link phase.Table 1 gives a bit summary of the register, 0 Fh (15 decimal), in the basic embodiment of the invention. The register, 0 Fh (15 decimal), is considered a shadow register, and is referred to as a DPM register. To access the shadow register, the "Spare Control Enable", bit 7, of register 1 Fh must be set.<tb> <sep>TABLE 1<tb> <sep>DPM Register summary<tb> <sep>ADDR<sep>NAME<sep>15-5<sep>4<sep>3<sep>2<sep>1<sep>0<sep>DEFAULT<tb> <sep>OFh<sep>DPM<sep>Reserved<sep>LPXTND<sep>MISMTCH<sep>DPMSTAT<sep>ANRSTR<sep>DPMDETEN<sep>0000h<tb> <sep>(15d)Table 2 shows a detailed description of the DPM register bits in the basic embodiment of the invention.<tb> <sep>TABLE 2<tb> <sep>DPM REGISTER (ADDRESS OFH, 15D)<tb> <sep>BIT<sep>NAME<sep>R/W<sep>DESCRIPTION<sep>DEFAULT<tb> <sep>15-6<sep>Reserved<sep>RO<sep>Write as "0", Ignore when read<sep>0<tb> <sep>5<sep>DPMWINEN<sep>R/W<sep>0<sep>Windowing<tb> <sep> <sep> <sep> <sep> <sep>scheme<tb> <sep> <sep> <sep> <sep> <sep>enable to<tb> <sep> <sep> <sep> <sep> <sep>reduce ip<tb> <sep> <sep> <sep> <sep> <sep>mis-<tb> <sep> <sep> <sep> <sep> <sep>detection<tb> <sep> <sep> <sep> <sep> <sep>probability<tb> <sep>4<sep>LPXTND: Extend Link Pulse width<sep>R/W<sep>0 = Normal link pulse width (100 ns)<sep>4<tb> <sep> <sep> <sep> <sep>1 = Set Link pulse width to 150 ns<tb> <sep>3<sep>MISMTCH: Word Miss match<sep>RO<sep>1 = Fast Link Pulse Word miss match occurred<sep>0<tb> <sep> <sep> <sep> <sep>during DPM detection<tb> <sep>2<sep>DPMSTAT: Status<sep>RO<sep>1 = Link partner is DPM capable<sep>0<tb> <sep>1<sep>ANRSTRT: Restart<sep>R/W<sep>1 = Restart Auto-Negotiation (identical to Reg. 0<sep>0<tb> <sep> <sep> <sep> <sep>bit 9) but used for DPM detection<tb> <sep>0<sep>DPMDETEN: DPM enable<sep>R/W<sep>1 = Enable DPM detection mode<sep>0LPXTND is Extend Link Pulse width. When this bit is set to a "1", the system increases the FLP width from a normal 100 ns to 150 ns.MISMTCH is Word Mismatch. When DPM detection is enabled, the Link partner's FLP word is compared to the unique FLP word sent. 
MISMTCH bit is set to a "1" if the comparison fails indicating that the Link Partner is not DPM capable. MISMTCH bit is set to "1" for detecting any legacy Ethernet device: either Auto-Negotiation or forced to 10 or 100 Mbits speed.DPMSTAT is DPM Status, When DPM detection is enabled, the Link partner's FLP word is compared to the unique FLP word sent. If it matches, the Link Partner is DPM capable and TAT bit is set to a "1"ANRSTRT is Restart. This bit, when set to a "1", restarts the Auto-Negotiation. The detector, after power up, is in a non-DPM detection mode. If DPM detection is needed DPMDETEN bit should be set to a "1" and restart the Auto-Negotiation. Auto-Negotiation can also be restarted by setting bit 9 of reg. 0 (Control Register) to a "1".DPMDETEN is DPM detection mode. When this bit is set to a "1", the detector enables DPM detection when Auto-Negotiation is re-started. Otherwise, the system Auto-Negotiates in a non-DPM detection mode as per the IEEE 802.3u standard. When in DPMDETEN mode, if a legacy Ethernet device is detected through either normal Auto-Negotiation Ability Detect or Parallel Detect paths, the Negotiation process continues to a completion, where link between the two stations is established.Table 3 shows a detailed description of the MII register, 0 Fh (15 decimal), referred to as a DPM register and its bits definition in the preferred embodiment of the invention.<tb> <sep>TABLE 3<tb> <sep>DPM Register Summary (Address OFh, 15d)<tb> <sep>ADDR<sep>NAME<sep>15-11<sep>10-7<sep>6<sep>5<sep>4<sep>3<sep>2<sep>1<sep>0<sep>DEFAULT<tb> <sep>OFh<sep>DPM<sep>FLPWIDTH<sep>Reserved<sep>DPMCONT<sep>Reserved<sep>LPXTND<sep>MISMTCH<sep>DPMSTAT<sep>ANRSTR<sep>DPMDETEN<sep>0000h<tb> <sep>(15d)Table 4 shows a detailed description of the MII register, 0 Fh (15 decimal), referred to as a DPM register and its bits definition.<tb> <sep>TABLE 4<tb> <sep>DPM Register (Address OFh, 15d)<tb> <sep> <sep> <sep> <sep> <sep>DE-<tb> <sep>BIT<sep>NAME<sep>R/W<sep>DESCRIPTION<sep>FAULT<tb> <sep>15-<sep>FLPWIDTH[4:0}<sep>R/W<sep>FLP width increment register<sep>0<tb> <sep>11<tb> <sep>10-7<sep>Reserved<sep>RO<sep>Write as "0", Ignore when<sep>0<tb> <sep> <sep> <sep> <sep>read<tb> <sep>6<sep>DPMCONT<sep>R/W<sep>0 = Stop after detecting a<sep>0<tb> <sep> <sep> <sep> <sep>DPM capable<tb> <sep>5<sep>Reserved<sep>RO<sep>Write as "0", Ignore when read<sep>0<tb> <sep>4<sep>LPXTND: Extend<sep>R/W<sep>0 = Normal link pulse width<sep>0<tb> <sep> <sep>Link Pulse width<sep> <sep>(100 ns)<tb> <sep> <sep> <sep> <sep>1 = Set Link pulse width to<tb> <sep> <sep> <sep> <sep>150 ns<tb> <sep>3<sep>MISMTCH: Word<sep>RO<sep>1 = Fast Link Pulse Word<sep>0<tb> <sep> <sep>mismatch<sep> <sep>mismatch<tb> <sep> <sep> <sep> <sep>occurred during DPM detection<tb> <sep> <sep> <sep> <sep>indicating that the link partner<tb> <sep> <sep> <sep> <sep>is a legacy<tb> <sep> <sep> <sep> <sep>device<tb> <sep>2<sep>DPMSTAT: Status<sep>RO<sep>1 = Link partner is DPM<sep>0<tb> <sep> <sep> <sep> <sep>capable<tb> <sep>1<sep>ANRSTRT:<sep>R/W<sep>1 = Restart Auto-Negotiation<sep>0<tb> <sep> <sep>Restart<sep> <sep>(identical to Reg. 0 bit 9) but<tb> <sep> <sep> <sep> <sep>used for DPM detection<tb> <sep>0<sep>DPMDETEN:<sep>R/W<sep>1 - Enable DPM detection<sep>0<tb> <sep> <sep>DPM enable<sep> <sep>modeFLPWIDTH [4:0] is the FLP width in DPMDETEN mode. When the detector is in DPMDETEN mode, if LPEXTND is set for a "1" then the FLP pulse width can be changed from a default 100 ns to 150 ns. 
The width can be further increased, to a recommended maximum of 950 ns, in 100 ns increments as specified by FLPWIDTH, a 5-bit register. Although the FLP width can theoretically be increased to 150 + 31 × 100 = 3250 ns, due to TX magnetic characteristics it is not recommended to increase the FLP width beyond 950 ns.

DPMCONT is Continuous DPM Detect Enable. While in DPMDETEN mode, if this bit is set to a "1", then after initially detecting a DPM capable Link partner the detector continues to monitor for the presence of a DPM capable Link Partner. While in this continuous DPM detection mode, if it detects a non-DPM Link partner, the detector establishes a link with the Link partner if possible. FIG. 7 shows the details of the DPM detection procedure combined with Auto-Negotiation.

LPXTND is Extend Link Pulse width. When this bit is set to a "1", the detector increases the link pulse width from the normal 100 ns to 150 ns. Additionally, the link pulse width can be increased to a maximum of 950 ns in 100 ns increments per the FLPWIDTH register.

MISMTCH is Word Mismatch. When DPM detection is enabled, the Link partner's FLP word is compared to the unique FLP word sent. The MISMTCH bit is set to a "1" if the comparison fails, indicating that the Link Partner is not DPM capable.

DPMSTAT is DPM Status. When DPM detection is enabled, the Link partner's FLP word is compared to the unique FLP word sent. If it matches, the Link Partner is DPM capable and the DPMSTAT bit is set to a "1".

ANRSTRT is Restart. This bit, when set to a "1", restarts the Auto-Negotiation. The detector, after power up, is in a non-DPM detection mode. If DPM detection is needed, the DPMDETEN bit should be set to a "1" and the Auto-Negotiation restarted. Auto-Negotiation can also be restarted by setting bit 9 of reg. 0 (Control Register) to a "1".

DPMDETEN is DPM detection enable. When this bit is set to a "1", the detector enables DPM detection when Auto-Negotiation is restarted. Otherwise, the detector Auto-Negotiates in a non-DPM detection mode as per the IEEE 802.3u standard.

In addition to DPM detection, the detector is capable of generating interrupts to indicate a DPMSTAT bit change if interrupt mode is enabled. The detector has a maskable interrupt bit in MII register 1Ah. Bit 12, DPMMASK, of register 1Ah, when set to a "1", disables generation of the DPMSTAT change interrupt. Bit 5, DPMINT, of register 1Ah indicates that there has been a change in the DPMSTAT bit.

TABLE 5 - Interrupt Register (Address 1Ah, 26d)
ADDRESS: 1Ah   NAME: INTERRUPT   DEFAULT: 9F0Xh
Bits 15-13: Reserved | 12: DPMMASK | 11-6: Reserved | 5: DPMINT | 4-0: Reserved

DPMINT is DPM Interrupt. Bit 5 of MII register 1Ah, a read-only bit, if read as a "1", indicates that there has been a DPMSTAT bit change in the DPM detection process. The change indicated could be from a "0" to a "1" or from a "1" to a "0". Additionally, if the interrupt has been enabled and DPMMASK is a "0", then the detector generates an interrupt. Reading register 1Ah clears the DPMINT bit and the interrupt that was caused by the DPMSTAT bit change.

DPMMASK is DPM Mask. When the detector is in DPMDETEN mode, bit 12 of MII register 1Ah, when set to a "1", disables any interrupt generated by a DPMSTAT change if the interrupt is enabled. However, bit 5, DPMINT, reflects a DPMSTAT change regardless of the DPMMASK bit.

The FIG. 7 flowchart shows the sequence for DPM detection combined with Auto-Negotiation in the basic embodiment of the invention. The FIG.
8 flowchart shows the sequence for DPM detection combined with Auto-Negotiation in the preferred embodiment of the invention. The following items highlight enhancements made in the preferred embodiment of the invention.

Link pulse width. In DPMDETEN mode, if the LPXTND bit is set to a "1", the FLP width is changed from the normal 100 ns to 150 ns. In addition, the detector can increase this width in 100 ns increments, as specified by the FLPWIDTH register. A value of "00000"b (default) in the FLPWIDTH register is equivalent to the basic embodiment of the invention. In the basic embodiment of the invention, if the MISMTCH bit is set to a "1" while the LPXTND bit is a "1", then the link pulse width remains at 150 ns during the normal Auto-Negotiation phase. In the preferred embodiment of the invention, the link pulse width is switched back to 100 ns during the normal Auto-Negotiation phase.

Continuous DPM detection. The preferred embodiment of the invention incorporates an additional bit, DPMCONT. While in DPMDETEN mode, if this bit is set to a "1", then after initially detecting a DPM capable Link partner the detector continues to monitor for the presence of a DPM capable Link partner. While in this continuous DPM detection mode, if it detects a non-DPM Link partner, the detector establishes a link with the Link partner if possible. FIG. 8 shows the details. In the preferred embodiment, the DPM detection function is identical to the basic embodiment if the DPMCONT bit is a "0" (default).

Interrupt. The preferred embodiment provides a maskable interrupt for the DPMSTAT bit change. This is enabled by setting DPMMASK, bit 12 of MII register 1Ah, to a "0" when the detector's interrupt bit 14 of MII register 1Ah is set to a "1". In the preferred embodiment, if DPMMASK is set to a "1" (default), then the detector does not provide the DPMSTAT bit change interrupt, as is the case in the basic embodiment.

DPM DETECTION OPERATION

The DPM detection process prevents the detector from supplying power to a legacy DTE not equipped to handle power through the MDI. In case the far-end device is not a DTE requiring power, the far-end unit's link detection is unaffected by the DPM detection mechanism. The standard Auto-Negotiation process occurs in parallel with the DPM detection process, enabling detection of devices that are not DTEs requiring power while DPM detection is enabled. Randomization in the DPM detection algorithm prevents two detection-enabled stations from simultaneously applying power. The DPM detection scheme works over CAT-3, CAT-5, or better cabling.

The detector is set to a mode to search for a DTE requiring power. The DTE requiring power's RD pair is effectively connected to its TD pair through a low pass filter. The detector of the detecting station transmits a random code of sufficient uniqueness. The DTE requiring power is detected when the detector of the detecting station receives its unique random code back through the DTE requiring power loopback. Once the detecting station detects the presence of the DTE requiring power, it supplies power to the DTE requiring power via the MDI connection. The detecting station then performs an Auto-Negotiation with the now-powered DTE requiring power.
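Before continuing with the detection operation, the register programming model from Tables 1-5 can be captured in a short C sketch. This is a minimal, hedged illustration only, not part of the disclosure: the MDIO accessors are hypothetical placeholders, and the bit positions are those listed in the tables above.

```c
#include <stdint.h>

/* Register addresses and bit positions, per Tables 1-5 above. */
#define REG_SPARE_CTRL  0x1F
#define SPARE_CTRL_EN   (1u << 7)    /* exposes the shadow DPM register      */
#define REG_DPM         0x0F
#define DPMDETEN        (1u << 0)    /* enable DPM detection mode            */
#define ANRSTRT         (1u << 1)    /* restart Auto-Negotiation             */
#define DPMSTAT         (1u << 2)    /* RO: Link partner is DPM capable      */
#define MISMTCH         (1u << 3)    /* RO: FLP word mismatch (legacy peer)  */
#define LPXTND          (1u << 4)    /* extend link pulse width to 150 ns    */
#define DPMCONT         (1u << 6)    /* continuous DPM detection             */
#define FLPWIDTH_SHIFT  11           /* FLPWIDTH[4:0] occupies bits 15-11    */

/* Hypothetical MDIO accessors -- not defined by the text. */
extern uint16_t mdio_read(uint8_t reg);
extern void mdio_write(uint8_t reg, uint16_t val);

/* Enable DPM detection and restart Auto-Negotiation. The effective FLP
 * width is 150 ns plus flp_increments * 100 ns; the text recommends not
 * exceeding 950 ns (i.e., flp_increments <= 8). */
static void dpm_enable(uint8_t flp_increments)
{
    uint16_t v = DPMDETEN | LPXTND |
                 ((uint16_t)(flp_increments & 0x1Fu) << FLPWIDTH_SHIFT);
    mdio_write(REG_SPARE_CTRL, mdio_read(REG_SPARE_CTRL) | SPARE_CTRL_EN);
    mdio_write(REG_DPM, v);
    mdio_write(REG_DPM, v | ANRSTRT);   /* restart AN to begin detection */
}
```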
During the detection process, if the detecting station receives valid 10Base-T NLPs, 100Base-TX idles, or Auto-Negotiation FLP code-words, it Auto-Negotiates normally.

To prevent a legacy link partner from saturating the detector's port with valid packets when connected to an unpowered DTE requiring power (the DTE requiring power loopback condition), the DTE requiring power's receive pair (RD) is effectively connected to its transmit pair (TD) through a low pass filter. This low pass filter cuts off the legacy link partner's valid data, avoiding network activity. The random code signal used for DTE requiring power detection must be of sufficiently low frequency content to pass through the filter, as well as through two worst-case CAT-3 cables. Once power is applied to the DTE requiring power, the loopback condition and low pass filter connection are removed and the RD and TD pairs operate normally.

Following reset, the DPM Detection Mode (DPMDETEN) is disabled and the normal, IEEE Standard, Auto-Negotiation process begins. To enable the DPMDETEN mode, firmware must set the DPM Detection Enable bit, DPMDETEN (DPM reg, bit 0), to a '1', and then set the Auto-Negotiation Restart bit, ANRSTRT (DPM reg, bit 1), to a '1'.

When in the DPMDETEN mode, setting the ANRSTRT bit causes a random sequence to be loaded into the Auto-Negotiation Advertisement Transmit register, and the first FLP word transmitted contains this sequence. While this sequence is transmitted, the link pulses are extended to 1.5 times the normal pulse width.

While in the DPMDETEN mode, as long as nothing is received from a link partner, the device continues to transmit the above FLP word. Once a link partner FLP burst is received, if it does not match the FLP word from the device, then the link partner is not DPM capable. In this case, the device sets the DPM Mismatched bit, MISMTCH (DPM reg, bit 3), to a '1'. If the link partner FLP burst received matches the FLP word the device transmitted, it indicates that the device at the other end is a DPM and that its relay is closed to loop the device's transmit data back to its receive port. In this case, the device sets the DPM Status bit, DPMSTAT (DPM reg, bit 2).

In either case of detecting a DPM or a normal link partner, the device stops the Auto-Negotiation process and waits in the TX-Disable state of the Auto-Negotiation Arbitrator State Machine. The firmware must take the necessary actions, e.g. power up the DPM, and then, in either case, disable the DPMDETEN bit and restart Auto-Negotiation to establish a link with the partner. The DPM register contains both the DPMSTAT and MISMTCH bits; therefore, polling this register alone provides the necessary status information to indicate either a DPM or a normal link partner.

FIRMWARE AND DPM DETECTION HANDSHAKE

The detector is in normal Auto-Negotiation mode upon startup. The firmware enables the DPMDETEN mode (DPMDETEN bit) and sets the ANRSTRT bit. The detector sends out the DPM random sequence FLP word. While searching for a DPM, if the received FLP burst matches what the detector transmitted, then the remote partner is a DPM. The DPMSTAT bit is set and the Auto-Negotiation process is stopped. On the other hand, while searching for a DPM, if a mismatch between the transmitted and received FLP words occurs, then the remote device is not a DPM. The MISMTCH bit is set and the Auto-Negotiation process is stopped.

The firmware monitors the DPMSTAT and MISMTCH bits.
Once either of these mutually exclusive status bits is set, the firmware clears the DPMDETEN bit and sets the ANRSTRT bit to complete the normal Auto-Negotiation process in order to link up with either the remote DPM or a normal link partner.

DPM MIS-DETECTION PROBABILITY

It is possible that the device at the other end also attempts to search for a DPM device using the same DPM Phone Detection procedure. If the link partner is another embodiment of the invention (another system detector), then the chance of both devices sending out an identical FLP word is 1 in 2^14.

To further reduce the mis-detection probability, the detector includes a time windowing scheme. If a matching FLP burst is received within the maximum time allowed for the FLP burst to make a round trip back to its receive port, the DPMSTAT bit is set. In the preferred embodiment, this maximum time is set to 16 us, which is more than the actual maximum round trip time for the longest cable length. The maximum time is programmable. Since a device can send out an FLP burst at any time within a 16 ms window, the probability of it sending out the FLP burst in any given 16 us span is 1 in 1000. Therefore, the mis-detection probability is 1 in (2^14 multiplied by 1000), or roughly 1 in 16 million events. When mis-detection does happen, one or both devices erroneously set the DPMSTAT bit. It is then up to the firmware to monitor this mis-detection event and take the appropriate actions.

Although a preferred embodiment of the present invention has been described, it should not be construed to limit the scope of the appended claims. For example, the present invention can be implemented by either a software embodiment or a hardware embodiment. Those skilled in the art will understand that various modifications may be made to the described embodiment. Moreover, to those skilled in the various arts, the invention itself will suggest solutions to other tasks and adaptations for other applications. It is therefore desired that the present embodiments be considered in all respects as illustrative and not restrictive, reference being made to the appended claims rather than the foregoing description to indicate the scope of the invention.
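The firmware handshake and the mis-detection arithmetic described above can likewise be sketched in C. This is a hedged illustration under the same assumptions as the previous sketch (hypothetical mdio_read/mdio_write helpers and register constants); the power-up call is a placeholder for the board-level action the text leaves to firmware.

```c
extern void power_up_dpm(void);   /* hypothetical board-level action */

/* Poll the mutually exclusive DPMSTAT/MISMTCH bits, then clear DPMDETEN
 * and restart Auto-Negotiation to link up with the partner, following
 * the handshake described above. */
static void dpm_handshake(void)
{
    uint16_t v;
    for (;;) {
        v = mdio_read(REG_DPM);
        if (v & DPMSTAT) {         /* remote partner is a DPM           */
            power_up_dpm();
            break;
        }
        if (v & MISMTCH)           /* remote partner is a legacy device */
            break;
    }
    v &= (uint16_t)~DPMDETEN;
    mdio_write(REG_DPM, v);
    mdio_write(REG_DPM, v | ANRSTRT);
}

/* Mis-detection odds from the text: identical 14-bit random FLP words
 * (1 in 2^14) combined with a 16 us match window inside a 16 ms burst
 * period (1 in 1000) gives 1 in 16,384,000 -- roughly 1 in 16 million. */
static double dpm_misdetect_probability(void)
{
    return 1.0 / ((double)(1u << 14) * 1000.0);
}
```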
The invention relates to an interconnect structure with redundant electrical connectors and associated systems and methods. Semiconductor die assemblies having interconnect structures with redundant electrical connectors are disclosed herein. In one embodiment, a semiconductor die assembly includes a first semiconductor die, a second semiconductor die, and an interconnect structure between the first and the second semiconductor dies. The interconnect structure includes a first conductive film coupled to the first semiconductor die and a second conductive film coupled to the second semiconductor die. The interconnect structure further includes a plurality of redundant electrical connectors extending between the first and second conductive films and electrically coupled to one another via the first conductive film. |
1. A semiconductor die assembly, comprising:
a first semiconductor die including a dielectric material located above the first semiconductor die;
a second semiconductor die; and
an interconnect structure that couples the first semiconductor die to the second semiconductor die, wherein the interconnect structure is between the first semiconductor die and the second semiconductor die, and wherein the interconnect structure includes—
a first conductive film coupled to the first semiconductor die,
a second conductive film coupled to the second semiconductor die, and
a plurality of redundant electrical connectors extending between the first conductive film and the second conductive film and electrically coupled to each other via the first conductive film,
wherein each of the redundant electrical connectors includes a continuous conductive post that is directly connected to the first conductive film and extends through a portion of the dielectric material to a location separated from that same portion of the dielectric material.
Interconnect structure with redundant electrical connectors and related systems and methods

Information about divisional application

This application is a divisional application of the Chinese invention patent application No. 201580036987.5, which derives from the PCT application with international application number PCT/US2015/032216, filed on May 22, 2015, and entitled "Interconnect structure with redundant electrical connectors and related systems and methods".

Technical field

The disclosed embodiments relate to interconnect structures formed between stacked semiconductor dies in a semiconductor die assembly. In several embodiments, the invention relates to an interconnect structure with redundant conductive connectors.

Background technique

Packaged semiconductor dies (including memory chips, microprocessor chips, and imager chips) generally include a semiconductor die mounted on a substrate and enclosed in a plastic protective cover. The die includes functional features (such as memory cells, processor circuits, and imager devices) and bond pads electrically connected to the functional features. The bond pads may be electrically connected to terminals outside the protective cover to allow the die to be connected to external circuitry.

In some die packages, semiconductor dies can be stacked on top of each other and electrically connected to each other through interconnects placed between adjacent dies. Metal solder can be used to connect the interconnects to bond pads of adjacent dies. However, one challenge of metal solder bonding is that the metal solder does not always bond properly to the interconnects and/or bond pads. As a result, the interconnects can be open, which can cause the die package to not function properly.
This in turn can reduce process yield during manufacturing.

Summary of the invention

In one aspect, the present disclosure relates to a semiconductor die assembly including: a first semiconductor die; a second semiconductor die; and an interconnect structure that couples the first semiconductor die to the second semiconductor die, wherein the interconnect structure is between the first semiconductor die and the second semiconductor die, and wherein the interconnect structure includes a first conductive film coupled to the first semiconductor die, a second conductive film coupled to the second semiconductor die, and a plurality of redundant electrical connectors extending between the first conductive film and the second conductive film and electrically coupled to each other via the first conductive film.

In another aspect, the present disclosure relates to a semiconductor die assembly including: a first semiconductor die having a first conductive trace; a second semiconductor die having a second conductive trace; and a plurality of redundant electrical connectors extending between the first conductive trace and the second conductive trace, wherein each of the redundant electrical connectors includes a conductive member coupled to the first conductive trace, the conductive member including an end portion, and a conductive bonding material between the conductive member and the second conductive trace, wherein the conductive bonding material is bonded to the end portion of the conductive member.

In another aspect, the present disclosure relates to a semiconductor die assembly including: a first semiconductor die having a conductive trace; a second semiconductor die; and a plurality of conductive members coupled to the conductive trace and extending vertically toward the second semiconductor die, wherein the conductive members are electrically coupled to each other via the conductive trace, and wherein at least one of the conductive members is coupled to the second semiconductor die.

In another aspect, the present disclosure relates to a method of forming a semiconductor die assembly, the method comprising: forming a first conductive film on a first semiconductor die; forming a second conductive film on a second semiconductor die; forming a plurality of redundant electrical connectors on the first conductive film; and coupling the redundant electrical connectors to the second conductive film.

In another aspect, the present disclosure relates to a method of forming a semiconductor die assembly, comprising: forming a first conductive trace on a first semiconductor die; forming on the first conductive trace a plurality of conductive members that project away from the first semiconductor die; placing a conductive bonding material on each of the conductive members; and reflowing the conductive bonding material to couple individual ones of the plurality of conductive members to a second conductive trace of a second semiconductor die.

Description of the drawings

Figure 1 is a cross-sectional view of a semiconductor die assembly configured in accordance with an embodiment of the present invention.

Figure 2A is an enlarged cross-sectional view of a semiconductor device including an interconnect structure configured in accordance with an embodiment of the present invention.

Figure 2B is a cross-sectional view illustrating certain failure modes of solder joints that can occur during manufacturing.

Fig.
3 is a top plan view showing an interconnect structure configured in accordance with another embodiment of the present invention.

Figures 4A to 4H are cross-sectional views illustrating semiconductor devices at various stages of a method for manufacturing an interconnect structure according to selected embodiments of the present invention.

Figure 5 is a schematic diagram of a system including a semiconductor die assembly configured in accordance with an embodiment of the present invention.

Detailed description

Specific details of several embodiments of stacked semiconductor die assemblies having interconnect structures with redundant electrical connectors, and of related systems and methods, are described below. The terms "semiconductor device" and "semiconductor die" generally refer to solid-state devices containing semiconductor materials, such as logic devices, memory devices, or other semiconductor circuits, components, and so on. In addition, the terms "semiconductor device" and "semiconductor die" may refer to a finished device or to an assembly or other structure at various processing stages before becoming a finished device. The term "substrate" may refer to a wafer-level substrate or a singulated die-level substrate, depending on the context in which the term is used. Those skilled in the relevant art will recognize that suitable steps of the methods described herein can be performed at the wafer level or at the die level. In addition, unless the context indicates otherwise, conventional semiconductor manufacturing techniques can be used to form the structures disclosed herein. Material may be deposited, for example, using chemical vapor deposition, physical vapor deposition, atomic layer deposition, spin coating, and/or other suitable techniques. Similarly, plasma etching, wet etching, chemical mechanical planarization, or other suitable techniques may be used to remove material. Those skilled in the relevant art should also understand that the present invention may have additional embodiments, and that the present invention may be practiced without several of the details of the embodiments described below with reference to FIGS. 1 to 5.

As used herein, the terms "vertical", "lateral", "upper" and "lower" may refer to the relative directions or positions of features in the semiconductor die assembly in view of the orientation shown in the figures. For example, "upper" or "uppermost" may refer to a feature positioned closer to the top of the page than another feature. However, these terms should be interpreted broadly to encompass semiconductor devices having other orientations.

Figure 1 is a cross-sectional view of a semiconductor die assembly 100 ("assembly 100") configured in accordance with an embodiment of the present invention. The assembly 100 includes a stack of first semiconductor dies 102a carried by a second semiconductor die 102b (collectively, "semiconductor dies 102"). The second semiconductor die 102b is carried by the interposer 120. The interposer 120 may include, for example, a semiconductor die, a dielectric spacer, and/or another suitable substrate, which has electrical connectors (not shown), such as vias, metal traces, etc., connected between the interposer 120 and the packaging substrate 125.
The package substrate 125 may include, for example, an interposer, a printed circuit board, another logic die, or another suitable substrate, which is connected to package contacts 127 (such as bond pads) and electrical connectors 128 (such as solder balls) that electrically couple the assembly 100 to external circuitry (not shown). In some embodiments, the package substrate 125 and/or the interposer 120 may be configured differently. For example, in some embodiments, the interposer 120 may be omitted and the second semiconductor die 102b may be connected directly to the packaging substrate 125.

The assembly 100 may further include a thermally conductive housing 110 ("housing 110"). The housing 110 may include a cover portion 112 and a wall portion 113 attached to or integrally formed with the cover portion 112. The cover portion 112 may be attached to the topmost first semiconductor die 102a by a first bonding material 114a (for example, an adhesive). The wall portion 113 may extend vertically away from the cover portion 112 and be attached by a second bonding material 114b (such as an adhesive) to the peripheral portion 106 of the first semiconductor die 102a (referred to by those skilled in the art as a "porch"). In addition to providing a protective cover, the housing 110 can act as a heat sink to absorb thermal energy and dissipate it away from the semiconductor dies 102. Correspondingly, the housing 110 may be made of thermally conductive materials, such as nickel (Ni), copper (Cu), aluminum (Al), ceramic materials with high thermal conductivity (such as aluminum nitride), and/or other suitable thermally conductive materials.

In some embodiments, the first bonding material 114a and/or the second bonding material 114b may be what is referred to in the art as a "thermal interface material" or "TIM", a material with elevated thermal conductivity designed to increase the thermal contact between bonded surfaces (for example, between a die surface and a heat sink). The TIM may include silicone-based greases, gels, or adhesives doped with conductive materials (such as carbon nanotubes, solder materials, diamond-like carbon (DLC), etc.), as well as phase-change materials. In other embodiments, the first bonding material 114a and/or the second bonding material 114b may include other suitable materials, such as metal (for example, copper) and/or other suitable thermally conductive materials.

Some or all of the first and/or second semiconductor dies 102 may be at least partially encapsulated in a dielectric underfill material 116. The underfill material 116 may be deposited or otherwise formed around and/or between some or all of the dies to enhance the mechanical connection with the dies and/or to provide electrical isolation between conductive features and/or structures (such as interconnects). The underfill material 116 may be a non-conductive epoxy paste, a capillary underfill material, a non-conductive film, a molded underfill material, and/or may include other suitable electrically insulating materials. In several embodiments, the underfill material 116 may be selected based on its thermal conductivity to enhance heat dissipation through the dies of the assembly 100.
In some embodiments, an underfill material 116 may be used instead of the first bonding material 114a and/or the second bonding material 114b to attach the housing 110 to the topmost first semiconductor die 102a.

The semiconductor dies 102 may each be formed from a semiconductor substrate (e.g., silicon, silicon-on-insulator, a compound semiconductor (e.g., gallium nitride), or another suitable substrate). The semiconductor substrate can be cut or singulated into semiconductor dies having any of various integrated circuit components or functional features, such as dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, or other forms of integrated circuit devices, including memory, processing circuits, imaging components, and/or other semiconductor devices. In selected embodiments, the assembly 100 may be configured as a hybrid memory cube (HMC), in which the first semiconductor dies 102a provide data storage (e.g., DRAM dies) and the second semiconductor die 102b provides memory control (e.g., DRAM control) within the HMC. In some embodiments, the assembly 100 may include other semiconductor dies in addition to, or instead of, one or more of the semiconductor dies 102. For example, such semiconductor dies may include integrated circuit components other than data storage devices and/or memory control components. In addition, although the assembly 100 includes 9 dies stacked on the interposer 120, in other embodiments the assembly 100 may include fewer than 9 dies (for example, 6 dies) or more than 9 dies (for example, 12 dies, 14 dies, 16 dies, 32 dies, etc.). For example, in one embodiment, the assembly 100 may include 4 memory dies stacked on 2 logic dies. Furthermore, in various embodiments, the semiconductor dies 102 may have different sizes. For example, in some embodiments, the second semiconductor die 102b may have the same footprint as at least one of the first semiconductor dies 102a.

As further shown in FIG. 1, the assembly 100 further includes: a plurality of first conductive traces 140a ("first traces 140a") located on the first side 109a (e.g., the front side) of the semiconductor dies 102; a plurality of second conductive traces 140b ("second traces 140b") located on the second side 109b (e.g., the back side) of the semiconductor dies 102; and a plurality of interconnect structures 130 that couple individual first traces 140a and individual second traces 140b to each other. Each of the first traces 140a and the second traces 140b may include, for example, conductive lines, conductive plates, or other conductive structures that extend laterally across one side of a semiconductor die 102. In the illustrated embodiment, the first traces 140a and the second traces 140b are coupled to corresponding through-substrate vias (TSVs) 142, which are configured to couple the first traces 140a and the second traces 140b at opposite sides of a semiconductor die 102 to each other. As shown in the figure, the TSVs 142 may be disposed toward the center of the semiconductor dies 102, and the first traces 140a and the second traces 140b may extend outward from the TSVs 142 toward the interconnect structures 130.
However, in other embodiments, the TSVs 142, the first traces 140a and the second traces 140b, and/or the interconnect structures 130 may be arranged differently.

The interconnect structures 130 may each include a plurality of redundant electrical connectors 134 ("redundant connectors 134") coupled between individual first traces 140a and individual second traces 140b of adjacent semiconductor dies 102. Therefore, each pair of first and second traces 140a and 140b is electrically and thermally coupled together through a plurality of redundant connectors 134. In one aspect of this embodiment, the redundant connectors 134 can improve process yield during manufacturing. For example, as described in more detail below, the individual structures 130 are less prone to open circuits relative to conventional interconnects or other electrical connectors because there are multiple redundant connectors spaced apart from each other along the traces 140a and 140b. In another aspect of this embodiment, the redundant connectors 134 can enhance heat conduction through the stack of semiconductor dies 102 and toward the cover portion 112 of the housing 110. In particular, the redundant connectors 134 can provide multiple heat transfer paths between adjacent semiconductor dies 102. In some embodiments, the redundant connectors 134 may be spaced apart from each other along the individual traces 140a and 140b to dissipate heat laterally across the semiconductor dies 102. In additional or alternative embodiments, additional redundant electrical connectors 138 (shown in dashed lines) may be located on an inner portion of the semiconductor dies 102 (e.g., between the TSVs 142) and/or on an outer portion (e.g., toward the edges of the dies 102) to further dissipate heat.

FIG. 2A is an enlarged view of a semiconductor device 205 having an interconnect structure 230 configured in accordance with an embodiment of the present invention. As shown in the figure, the interconnect structure 230 includes a plurality of redundant electrical connectors 234 ("redundant connectors 234") extending between a first semiconductor substrate 204a (such as a semiconductor wafer or die) and a second semiconductor substrate 204b (such as a semiconductor wafer or die). Each of the redundant connectors 234 includes a conductive feature or post 232 coupled to a first conductive film or first trace 240a of the first substrate 204a. Each redundant connector 234 also includes a second conductive member or bond pad 233 (e.g., a bump bond pad) coupled to a second conductive film or second trace 240b on the second substrate 204b. A conductive bonding material 235 may form a conductive joint that couples the bond pad 233 to the end portion 237 of the corresponding post 232. The conductive bonding material 235 may include, for example, solder (e.g., metal solder), conductive epoxy, or conductive paste.

Generally speaking, one challenge of solder bonding materials is that they do not always properly bond interconnects to bond pads. For example, FIG. 2B shows several failure modes of a solder bonding material 295. The first failure mode F1 occurs when the interconnect 292 has a height that is less than the height of an adjacent interconnect (not shown). In this failure mode, the large gap between the interconnect 292 and its corresponding bond pad 293 prevents the bonding material 295 from contacting the bond pad 293.
The second failure mode F2 occurs when residual contaminants (not shown) on the interconnect 292 and/or the bond pad 293 prevent the bonding material 295 from wetting to the interconnect 292 and/or the bond pad 293. The third failure mode F3 can be attributed to solder wicking that occurs during reflow or other heating processes. Specifically, the solder wicking effect occurs when surface tension draws the heated bonding material 295 toward the sidewall 296 of the interconnect 292 and away from the bond pad 293. The fourth failure mode F4 involves cracking or rupture of the bonding material 295 between the interconnect 292 and the bond pad 293. Cracking can occur, for example, when the solder material consumes (i.e., reacts with) certain materials of the interconnect (e.g., palladium (Pd)), causing the bonding material 295 to become brittle and easily broken.

However, interconnect structures configured in accordance with several embodiments of the present invention can address these and other limitations of conventional interconnects and related structures. Referring again to FIG. 2A, the redundant connectors 234 are configured so that even if some of the connectors 234 fail (for example, through one of the failure modes F1 to F4), the interconnect structure 230 will not fail as long as at least one of the other redundant connectors 234 remains connected to the first trace 240a and the second trace 240b. In the embodiment shown in FIG. 2A, for example, up to 4 redundant connectors 234 can fail without opening the interconnect structure 230. In other embodiments, the interconnect structure 230 may have a different number of redundant connectors, such as more than 5 redundant connectors (for example, 6, 8, 10, or more than 10 connectors) or fewer than 5 redundant connectors (for example, 2, 3, or 4 connectors). In several embodiments, the number of redundant connectors can be selected to improve the expected process yield during manufacturing. For example, in some cases an interconnect structure with 3 redundant connectors can increase the process yield by 0.5%, while 4 redundant connectors increase the yield by only an additional 0.05%. In this scenario, the 3-connector configuration can be a more acceptable design than the 4-connector configuration because the expected difference in process yield is negligible.

Another advantage of the interconnect structure of various embodiments is that the redundant electrical connectors can reduce the current density through the conductive joints (e.g., through the bonding material 235 of the redundant connectors 234). For example, an interconnect structure with 10 redundant connectors can reduce the current density through each of its conductive joints by a factor of about 10. A related advantage is that lower current density can reduce electromigration. For example, lower current density can reduce electromigration through tin/silver-based (SnAg) solder joints, which are generally more susceptible to electromigration than other interconnect materials (such as copper). In some embodiments, the number of redundant electrical connectors may be selected to balance a reduction in electromigration against a potential increase in capacitance across the interconnect structure.

A further advantage of the interconnect structure of the various embodiments is that the redundant electrical connectors can be closely packed. For example, FIG.
3 is a top plan view showing close-packed redundant electrical connectors 334 ("redundant connectors 334") of an interconnect structure 330 configured in accordance with another embodiment of the present invention. As shown in the figure, the redundant connectors 334 are each formed on a conductive trace 340 of the corresponding interconnect structure 330. The redundant connectors 334 each have a diameter d1 and are separated from each other by a separation distance s1. In one embodiment, the diameter d1 may be approximately the same size as the separation distance s1. In another embodiment, the separation distance s1 may be smaller than the diameter d1. For example, the separation distance s1 can be less than 75% of d1, less than 50% of d1, or less than 25% of d1. In contrast, conventional interconnects cannot be packed this densely because of the risk that metal solder could bridge the interconnects and cause electrical shorts. However, because the redundant connectors 334 are electrically coupled to each other (i.e., via the conductive trace 340), such bridging does not pose this risk.

FIGS. 4A to 4H are partial cross-sectional views illustrating a semiconductor device 405 at various stages of a method for manufacturing an interconnect structure according to selected embodiments of the present invention. Referring first to FIG. 4A, the semiconductor device 405 includes a first substrate 404a (such as a silicon wafer or die) and a first dielectric material 450a (such as silicon oxide) formed on the first substrate 404a. The first dielectric material 450a is patterned to expose substrate contacts 407 (e.g., copper bond pads). The first dielectric material 450a may also be patterned to expose other substrate contacts (not shown) of the first substrate 404a, such as substrate contacts connected to an integrated circuit (IC) device (such as a memory; not shown) of the first substrate 404a. The semiconductor device 405 further includes a patterned first conductive film or first conductive trace 440a (such as a copper or copper alloy film) formed on the first dielectric material 450a and the substrate contact 407.

FIG. 4B shows the semiconductor device 405 after forming a mask 460 (e.g., a photoresist mask, a hard mask, etc.) and an opening 452 in the first dielectric material 450a. The opening 452 may be formed by removing (e.g., etching) a portion of the first dielectric material 450a through the corresponding mask opening 461. As shown in FIG. 4B, the opening 452 may expose an underlying portion of the first conductive trace 440a.

FIG. 4C shows the semiconductor device 405 after forming conductive features or pillars 432 on the first conductive trace 440a. In some embodiments, a seed material 472 (e.g., copper) can be deposited on the sidewalls 462 of the mask openings 461 (FIG. 4B), and a conductive material 470 (e.g., copper) is then electroplated onto the seed material 472 to form the pillars 432. In the illustrated embodiment, a barrier material 474 (for example, nickel) and an interface material 475 (for example, palladium) may also be sequentially electroplated onto the conductive material 470. In other embodiments, other deposition techniques (such as sputtering) may be used instead of electroplating.

FIG. 4D shows the semiconductor device 405 after an opening 408 is formed in the first substrate 404a and a protective material 463 is formed on the pillars 432.
As shown in the figure, the opening 408 extends through the first substrate 404a and exposes a portion of the substrate contact 407 toward the bottom of the opening 408. In several embodiments, the opening 408 may be formed by first thinning the first substrate 404a (e.g., via etching, back grinding, etc.) and then removing the substrate material (e.g., via etching). In the illustrated embodiment, a protective material or protective film 463 (such as a polymer film) can protect the pillars 432 during manufacturing.

FIG. 4E shows the semiconductor device 405 after forming the TSV 442, a second dielectric material 450b, and a second conductive film or second conductive trace 440b. The TSV 442 may be formed by filling the opening 408 (FIG. 4D) in the first substrate 404a with a conductive material 476 (for example, copper or a copper alloy). In several embodiments, the second conductive trace 440b and the second dielectric material 450b may be formed in a manner similar to that of the first conductive trace 440a and the first dielectric material 450a.

FIG. 4F shows the semiconductor device 405 after a mask 465 and an opening 453 are formed in the second dielectric material 450b. The opening 453 may be formed by removing (e.g., etching) a portion of the second dielectric material 450b through the corresponding mask opening 466. As shown in FIG. 4F, the opening 453 in the second dielectric material 450b may expose an underlying portion of the second conductive trace 440b.

FIG. 4G shows the semiconductor device 405 after forming conductive features or bond pads 433 on the second conductive trace 440b. Similar to the pillars 432, a seed material 477 (such as copper) can be deposited on the sidewalls 467 of the mask openings 466 (FIG. 4F) and/or the second conductive trace 440b, and a conductive material 478 (such as copper) is then electroplated onto the seed material 477 to form the bond pads 433. In some embodiments, the bond pads 433 may include a barrier material 484 (such as nickel) and an interface material 485 (such as palladium) sequentially electroplated onto the conductive material 478.

FIG. 4H shows the semiconductor device 405 after removing the mask 465 and the protective film 463 (FIG. 4G) and forming a bonding material 435 (e.g., metal solder) on the end portions 437 of the pillars 432. In one embodiment, the bonding material 435 may be a plated material. In another embodiment, the bonding material 435 may be in the form of solder balls. In either case, the bonding material 435 may be heated (e.g., reflowed) and brought into contact with the corresponding bond pads 433 of the second substrate 404b. After reflow, the bonding material 435 may be allowed to cool and solidify into conductive joints that attach the pillars 432 to the bond pads 433. In some embodiments, the structure and function of the bond pads 433 of the second substrate 404b may be substantially similar to the structure and function of the bond pads 433 (FIG. 4G) formed over the first substrate 404a.

Any of the interconnect structures and/or semiconductor die assemblies described above with reference to FIGS. 1 to 4H can be incorporated into any of a myriad of larger and/or more complex systems, a representative example of which is the system 590 shown schematically in FIG. 5. The system 590 may include a semiconductor die assembly 500, a power supply 592, a driver 594, a processor 596, and/or other subsystems or components 598.
The semiconductor die assembly 500 may include features substantially similar to those of the stacked semiconductor die assemblies described above, and may therefore include various features that enhance heat dissipation. The resulting system 590 can perform any of various functions, such as memory storage, data processing, and/or other suitable functions. Accordingly, representative systems 590 may include, but are not limited to, handheld devices (such as mobile phones, tablet computers, digital readers, and digital audio players), computers, and appliances. The components of the system 590 may be housed in a single unit or distributed across multiple interconnected units (e.g., via a communication network). The components of the system 590 may also include remote devices and any of a variety of computer-readable media.

It should be understood from the foregoing that specific embodiments of the present invention have been described herein for illustrative purposes, but that various modifications can be made without departing from the present invention. For example, although several of the embodiments of semiconductor die assemblies are described with respect to HMCs, in other embodiments the semiconductor die assemblies may be configured as other memory devices or other types of stacked die assemblies. In addition, although certain features or components have been shown as having certain arrangements or configurations in the illustrated embodiments, other arrangements and configurations are possible. For example, although the TSV 442 (FIG. 4E) is formed after the front-end metallization (i.e., after the substrate contact 407 is formed) in the illustrated embodiment, in other embodiments the TSV 442 may be formed before or simultaneously with the front-end metallization. Furthermore, although the pillars are bonded to bump pads in the illustrated embodiment, in other embodiments the pillars may be bonded to other structures or directly to the conductive traces. In addition, although advantages related to certain embodiments of the new technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments falling within the scope of the present technology need exhibit such advantages. Therefore, the present invention and related technologies may cover other embodiments that are not expressly shown or described herein.
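As a back-of-the-envelope illustration of the redundancy arithmetic discussed above, the following C sketch computes how the open-circuit probability and the per-joint current scale with the number of redundant connectors. It assumes independent, identically likely connector failures, an assumption the text does not state explicitly, and the per-connector failure probability used is invented for illustration.

```c
#include <math.h>
#include <stdio.h>

/* The structure opens only if every one of its n redundant connectors
 * fails; with independent failures of probability p each, that is p^n. */
static double open_probability(double p, int n) { return pow(p, n); }

/* Current through each conductive joint scales as 1/n of the total. */
static double joint_current_fraction(int n) { return 1.0 / (double)n; }

int main(void)
{
    const double p = 0.01;   /* illustrative per-connector failure rate */
    for (int n = 1; n <= 5; n++)
        printf("n=%d  open probability=%.2e  per-joint current=%.0f%% of total\n",
               n, open_probability(p, n), 100.0 * joint_current_fraction(n));
    return 0;
}
```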
A device includes a routing buffer (48). The routing buffer (48) includes a first port configured to receive a signal relating to an analysis of at least a portion of a data stream. The routing buffer (48) also includes a second port configured to selectively provide the signal to a first routing line of a block (32) of a state machine at a first time. The routing buffer (48) further includes a third port configured to selectively provide the signal to a second routing line of the block (32) of the state machine at the first time. |
CLAIMS What is claimed is: 1. A device, comprising: a routing buffer comprising: a first port configured to receive a signal relating to an analysis of at least a portion of a data stream; a second port configured to selectively provide the signal to a first routing line of a block of a state machine at a first time; and a third port configured to selectively provide the signal to a second routing line of the block of the state machine at the first time. 2. The device of claim 1, wherein the routing buffer comprises a bi-directional drive element coupled between the first port and the second port. 3. The device of claim 2, wherein the routing buffer comprises a first control input configured to activate the bi-directional drive element to provide the signal from the second port. 4. The device of claim 1, wherein the routing buffer comprises a uni-directional drive element coupled between the first port and the third port. 5. The device of claim 4, wherein the routing buffer comprises a second control input configured to activate the uni-directional drive element to provide the signal from the third port. 6. The device of claim 1, wherein the signal comprises a first signal and wherein the routing buffer further comprises a fourth port configured to selectively provide a second signal relating to an analysis of at least a portion of the data stream received at the second port at a second time, wherein the first port is configured to selectively provide the second signal at the second time. 7. The device of claim 6, wherein the routing buffer comprises a third control input configured to activate a bi-directional drive element to provide the second signal from the first port. 8. The device of claim 7, wherein the routing buffer comprises a uni-directional drive element coupled between the second port and the fourth port. 9. The device of claim 8, wherein the routing buffer comprises a fourth control input configured to activate the uni-directional drive element to provide the second signal from the fourth port. 10. A device, comprising: a state machine comprising: a plurality of blocks, each of the blocks comprising: a plurality of rows, each of the rows comprising a plurality of programmable elements, each of the programmable elements configured to analyze at least a portion of a data stream and to selectively output the result of the analysis; and an intra-block switch configured to selectively route the result; and a routing buffer coupled to one of the blocks and configured to: receive the result from the intra-block switch at a first port; and selectively provide the result from a second port and a third port of the routing buffer simultaneously. 11. The device of claim 10, wherein the intra-block switch comprises a plurality of row routing lines configured to be selectively coupled to the programmable elements and configured to provide the results from the programmable elements. 12. The device of claim 11, wherein the intra-block switch comprises a plurality of block routing lines configured to be selectively coupled to the plurality of row routing lines. 13. The device of claim 12, wherein the intra-block switch comprises a plurality of junction points configured to selectively couple the block routing lines to the plurality of row routing lines. 14. The device of claim 10, wherein the routing buffer comprises a bi-directional drive element coupled between the first port and the second port. 15.
The device of claim 14, wherein the routing buffer comprises a control input configured to activate the bi-directional drive element to provide the result from the second port. 16. The device of claim 10, wherein the routing buffer comprises a uni-directional drive element coupled between the first port and the third port. 17. The device of claim 16, wherein the routing buffer comprises a control input configured to activate the uni-directional drive element to provide the result from the third port. 18. A method, comprising: receiving at a first port of a routing buffer a signal relating to an analysis of at least a portion of a data stream; providing from a second port of the routing buffer the signal to a first block routing line of a block of a state machine at a first time; and providing from a third port of the routing buffer the signal to a second block routing line of the block of the state machine at the first time. 19. The method of claim 18, comprising receiving at an intra-block switch of the block of the state machine the signal from both the first block routing line and the second block routing line. 20. The method of claim 19, comprising: providing the signal from the first block routing line to a first row routing line in the intra-block switch; and providing the signal from the second block routing line to a second row routing line in the intra-block switch. 21. The method of claim 20, comprising: providing the signal from the first row routing line to a first programmable element; and providing the signal from the second row routing line to a second programmable element. 22. The method of claim 21, comprising: activating the first programmable element responsive to the signal; and activating the second programmable element responsive to the signal.
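Before turning to the detailed description, the behavior recited in claims 1-5 can be modeled with a short, hedged C sketch: one input port whose signal is selectively driven onto two routing lines at the same time, under independent control inputs. The struct and function names are invented for illustration and do not appear in the disclosure.

```c
#include <stdbool.h>

/* Behavioral model of the claimed routing buffer. Control inputs select
 * whether the bi-directional element drives port 2 and whether the
 * uni-directional element drives port 3; both may drive simultaneously. */
typedef struct {
    bool drive_port2;   /* control input for the bi-directional element  */
    bool drive_port3;   /* control input for the uni-directional element */
} RoutingBuffer;

#define HI_Z (-1)   /* models an undriven (high-impedance) output */

/* Present the port-1 input signal on ports 2 and 3 for one time step. */
static void route(const RoutingBuffer *rb, int port1_in,
                  int *port2_out, int *port3_out)
{
    *port2_out = rb->drive_port2 ? port1_in : HI_Z;
    *port3_out = rb->drive_port3 ? port1_in : HI_Z;
}
```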
METHODS AND SYSTEMS FOR ROUTING IN A STATE MACHINE

BACKGROUND

Field of Invention

[0001] Embodiments of the invention relate generally to electronic devices and, more specifically, in certain embodiments, to electronic devices with parallel finite state machines for pattern-recognition.

Description of Related Art

[0002] Complex pattern recognition can be inefficient to perform on a conventional von Neumann based computer. A biological brain, in particular a human brain, however, is adept at performing pattern recognition. Current research suggests that a human brain performs pattern recognition using a series of hierarchically organized neuron layers in the neocortex. Neurons in the lower layers of the hierarchy analyze "raw signals" from, for example, sensory organs, while neurons in higher layers analyze signal outputs from neurons in the lower levels. This hierarchical system in the neocortex, possibly in combination with other areas of the brain, accomplishes the complex pattern recognition that enables humans to perform high level functions such as spatial reasoning, conscious thought, and complex language.

[0003] In the field of computing, pattern recognition tasks are increasingly challenging. Ever larger volumes of data are transmitted between computers, and the number of patterns that users wish to identify is increasing. For example, spam or malware are often detected by searching for patterns in a data stream, e.g., particular phrases or pieces of code. The number of patterns increases with the variety of spam and malware, as new patterns may be implemented to search for new variants. Searching a data stream for each of these patterns can form a computing bottleneck. Often, as the data stream is received, it is searched for each pattern, one at a time. The delay before the system is ready to search the next portion of the data stream increases with the number of patterns. Thus, pattern recognition may slow the receipt of data.

[0004] Hardware has been designed to search a data stream for patterns, but this hardware often is unable to process adequate amounts of data in the time given. Some devices configured to search a data stream do so by distributing the data stream among a plurality of circuits. The circuits each determine whether the data stream matches a portion of a pattern. Often, a large number of circuits operate in parallel, each searching the data stream at generally the same time. However, there has not been a system that effectively allows for performing pattern recognition in a manner more comparable to that of a biological brain. Development of such a system is desirable.

BRIEF DESCRIPTION OF DRAWINGS

[0005] FIG. 1 illustrates an example of a system having a state machine engine, according to various embodiments of the invention.

[0006] FIG. 2 illustrates an example of an FSM lattice of the state machine engine of FIG. 1, according to various embodiments of the invention.

[0007] FIG. 3 illustrates an example of a block of the FSM lattice of FIG. 2, according to various embodiments of the invention.

[0008] FIG. 4 illustrates an example of a row of the block of FIG. 3, according to various embodiments of the invention.

[0009] FIG. 5 illustrates an example of a Group of Two of the row of FIG. 4, according to various embodiments of the invention.

[0010] FIG. 6 illustrates an example of a finite state machine graph, according to various embodiments of the invention.

[0011] FIG.
7 illustrates an example of a two-level hierarchy implemented with FSM lattices, according to various embodiments of the invention.

[0012] FIG. 8 illustrates an example of a method for a compiler to convert source code into a binary file for programming of the FSM lattice of FIG. 2, according to various embodiments of the invention.

[0013] FIG. 9 illustrates a state machine engine, according to various embodiments of the invention.

[0014] FIG. 10 illustrates a second example of a row of the block of FIG. 3, according to various embodiments of the invention.

[0015] FIG. 11 illustrates the intra-block switch of FIG. 10, according to various embodiments of the invention.

[0016] FIG. 12 illustrates a first buffer of FIG. 3, according to various embodiments of the invention.

[0017] FIG. 13 illustrates a second buffer of FIG. 3, according to various embodiments of the invention.

DETAILED DESCRIPTION

[0018] Turning now to the figures, FIG. 1 illustrates an embodiment of a processor-based system, generally designated by reference numeral 10. The system 10 may be any of a variety of types such as a desktop computer, laptop computer, pager, cellular phone, personal organizer, portable audio player, control circuit, camera, etc. The system 10 may also be a network node, such as a router, a server, or a client (e.g., one of the previously-described types of computers). The system 10 may be some other sort of electronic device, such as a copier, a scanner, a printer, a game console, a television, a set-top video distribution or recording system, a cable box, a personal digital media player, a factory automation system, an automotive computer system, or a medical device. (The terms used to describe these various examples of systems, like many of the other terms used herein, may share some referents and, as such, should not be construed narrowly in virtue of the other items listed.)

[0019] In a typical processor-based device, such as the system 10, a processor 12, such as a microprocessor, controls the processing of system functions and requests in the system 10. Further, the processor 12 may comprise a plurality of processors that share system control. The processor 12 may be coupled directly or indirectly to each of the elements in the system 10, such that the processor 12 controls the system 10 by executing instructions that may be stored within the system 10 or external to the system 10.

[0020] In accordance with the embodiments described herein, the system 10 includes a state machine engine 14, which may operate under control of the processor 12. The state machine engine 14 may employ any one of a number of state machine architectures, including, but not limited to, Mealy architectures, Moore architectures, Finite State Machines (FSMs), Deterministic FSMs (DFSMs), Bit-Parallel State Machines (BPSMs), etc. Though a variety of architectures may be used, for discussion purposes the application refers to FSMs. However, those skilled in the art will appreciate that the described techniques may be employed using any one of a variety of state machine architectures.

[0021] As discussed further below, the state machine engine 14 may include a number of (e.g., one or more) finite state machine (FSM) lattices. Each FSM lattice may include multiple FSMs that each receive and analyze the same data in parallel. Further, the FSM lattices may be arranged in groups (e.g., clusters), such that clusters of FSM lattices may analyze the same input data in parallel.
Further, clusters of FSM lattices of the state machine engine 14 may be arranged in a hierarchical structure wherein outputs from state machine lattices on a lower level of the hierarchical structure may be used as inputs to state machine lattices on a higher level. By cascading clusters of parallel FSM lattices of the state machine engine 14 in series through the hierarchical structure, increasingly complex patterns may be analyzed (e.g., evaluated, searched, etc.).

[0022] Further, based on the hierarchical parallel configuration of the state machine engine 14, the state machine engine 14 can be employed for pattern recognition in systems that utilize high processing speeds. For instance, embodiments described herein may be incorporated in systems with processing speeds of 1 GByte/sec. Accordingly, utilizing the state machine engine 14, data from high speed memory devices or other external devices may be rapidly analyzed for various patterns. The state machine engine 14 may analyze a data stream according to several criteria, and their respective search terms, at about the same time, e.g., during a single device cycle. Each of the FSM lattices within a cluster of FSMs on a level of the state machine engine 14 may receive the same search term from the data stream at about the same time, and each of the parallel FSM lattices may determine whether the term advances the state machine engine 14 to the next state in the processing criterion. The state machine engine 14 may analyze terms according to a relatively large number of criteria, e.g., more than 100, more than 110, or more than 10,000. Because the FSM lattices operate in parallel, they may apply the criteria to a data stream having a relatively high bandwidth, e.g., a data stream of greater than or generally equal to 1 GByte/sec, without slowing the data stream.

[0023] In one embodiment, the state machine engine 14 may be configured to recognize (e.g., detect) a great number of patterns in a data stream. For instance, the state machine engine 14 may be utilized to detect a pattern in one or more of a variety of types of data streams that a user or other entity might wish to analyze. For example, the state machine engine 14 may be configured to analyze a stream of data received over a network, such as packets received over the Internet or voice or data received over a cellular network. In one example, the state machine engine 14 may be configured to analyze a data stream for spam or malware. The data stream may be received as a serial data stream, in which the data is received in an order that has meaning, such as in a temporally, lexically, or semantically significant order. Alternatively, the data stream may be received in parallel or out of order and, then, converted into a serial data stream, e.g., by reordering packets received over the Internet. In some embodiments, the data stream may present terms serially, but the bits expressing each of the terms may be received in parallel. The data stream may be received from a source external to the system 10, or may be formed by interrogating a memory device, such as the memory 16, and forming the data stream from data stored in the memory 16.
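The parallel analysis just described can be illustrated, purely for exposition, in software. In the following Python sketch (not the state machine engine's implementation), each criterion is modeled as a tiny FSM and every FSM consumes the same byte before the stream advances, so adding criteria does not slow the stream; the matcher construction and the example signatures are assumptions made for the sketch.

```python
# Illustrative sketch only: several simple FSMs analyze the same data
# stream in lockstep, analogous to parallel FSM lattices. The naive
# matcher below ignores overlapping prefixes; real criteria would be
# compiled into proper automata.

def make_matcher(pattern: bytes):
    """Return a step function whose state is the count of pattern bytes
    matched so far; it reports True each time `pattern` completes."""
    state = 0
    def step(byte: int) -> bool:
        nonlocal state
        if byte == pattern[state]:
            state += 1
        else:
            state = 1 if byte == pattern[0] else 0
        if state == len(pattern):
            state = 0
            return True
        return False
    return step

# Hypothetical criteria; each models one search term or signature.
criteria = {"spam_sig": make_matcher(b"viagra"),
            "malware_sig": make_matcher(b"\x90\x90\xeb")}

stream = b"xxviagra\x90\x90\xebyy"
for offset, byte in enumerate(stream):
    for name, step in criteria.items():   # every criterion sees each byte
        if step(byte):
            print(f"{name} matched ending at byte offset {offset}")
```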
In other examples, the state machine engine 14 may be configured to recognize a sequence of characters that spell a certain word, a sequence of genetic base pairs that specify a gene, a sequence of bits in a picture or video file that form a portion of an image, a sequence of bits in an executable file that form a part of a program, or a sequence of bits in an audio file that form a part of a song or a spoken phrase. The stream of data to be analyzed may include multiple bits of data in a binary format or other formats, e.g., base ten, ASCII, etc. The stream may encode the data with a single digit or multiple digits, e.g., several binary digits.

[0024] As will be appreciated, the system 10 may include memory 16. The memory 16 may include volatile memory, such as Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), Synchronous DRAM (SDRAM), Double Data Rate DRAM (DDR SDRAM), DDR2 SDRAM, DDR3 SDRAM, etc. The memory 16 may also include nonvolatile memory, such as read-only memory (ROM), PC-RAM, silicon-oxide-nitride-oxide-silicon (SONOS) memory, metal-oxide-nitride-oxide-silicon (MONOS) memory, polysilicon floating gate based memory, and/or other types of flash memory of various architectures (e.g., NAND memory, NOR memory, etc.) to be used in conjunction with the volatile memory. The memory 16 may include one or more memory devices, such as DRAM devices, that may provide data to be analyzed by the state machine engine 14. Such devices may be referred to as or include solid state drives (SSD's), MultiMediaCards (MMC's), SecureDigital (SD) cards, CompactFlash (CF) cards, or any other suitable device. Further, it should be appreciated that such devices may couple to the system 10 via any suitable interface, such as Universal Serial Bus (USB), Peripheral Component Interconnect (PCI), PCI Express (PCI-E), Small Computer System Interface (SCSI), IEEE 1394 (Firewire), or any other suitable interface. To facilitate operation of the memory 16, such as the flash memory devices, the system 10 may include a memory controller (not illustrated). As will be appreciated, the memory controller may be an independent device or it may be integral with the processor 12. Additionally, the system 10 may include an external storage 18, such as a magnetic storage device. The external storage may also provide input data to the state machine engine 14.

[0025] The system 10 may include a number of additional elements. For instance, a compiler 20 may be used to program the state machine engine 14, as described in more detail with regard to FIG. 8. An input device 22 may also be coupled to the processor 12 to allow a user to input data into the system 10. For instance, an input device 22 may be used to input data into the memory 16 for later analysis by the state machine engine 14. The input device 22 may include buttons, switching elements, a keyboard, a light pen, a stylus, a mouse, and/or a voice recognition system, for instance. An output device 24, such as a display, may also be coupled to the processor 12. The display 24 may include an LCD, a CRT, LEDs, and/or an audio display, for example. The system may also include a network interface device 26, such as a Network Interface Card (NIC), for interfacing with a network, such as the Internet. As will be appreciated, the system 10 may include many other components, depending on the application of the system 10.

[0026] FIGs. 2-5 illustrate an example of a FSM lattice 30. In an example, the FSM lattice 30 comprises an array of blocks 32.
As will be described, each block 32 may include a plurality of selectively couple-able hardware elements (e.g., programmable elements and/or special purpose elements) that correspond to a plurality of states in a FSM. Similar to a state in a FSM, a hardware element can analyze an input stream and activate a downstream hardware element, based on the input stream.

[0027] The programmable elements can be programmed to implement many different functions. For instance, the programmable elements may include state machine elements (SMEs) 34, 36 (shown in FIG. 5) that are hierarchically organized into rows 38 (shown in FIGs. 3 and 4) and blocks 32 (shown in FIGs. 2 and 3). To route signals between the hierarchically organized SMEs 34, 36, a hierarchy of programmable switching elements can be used, including inter-block switching elements 40 (shown in FIGs. 2 and 3), intra-block switching elements 42 (shown in FIGs. 3 and 4) and intra-row switching elements 44 (shown in FIG. 4).

[0028] As described below, the switching elements may include routing structures and buffers. A SME 34, 36 can correspond to a state of a FSM implemented by the FSM lattice 30. The SMEs 34, 36 can be coupled together by using the programmable switching elements as described below. Accordingly, a FSM can be implemented on the FSM lattice 30 by programming the SMEs 34, 36 to correspond to the functions of states and by selectively coupling together the SMEs 34, 36 to correspond to the transitions between states in the FSM.

[0029] FIG. 2 illustrates an overall view of an example of a FSM lattice 30. The FSM lattice 30 includes a plurality of blocks 32 that can be selectively coupled together with programmable inter-block switching elements 40. The inter-block switching elements 40 may include conductors 46 (e.g., wires, traces, etc.) and buffers 48 and 50. In an example, buffers 48 and 50 are included to control the connection and timing of signals to/from the inter-block switching elements 40. As described further below, the buffers 48 may be provided to buffer data being sent between blocks 32, while the buffers 50 may be provided to buffer data being sent between inter-block switching elements 40. Additionally, the blocks 32 can be selectively coupled to an input block 52 (e.g., a data input port) for receiving signals (e.g., data) and providing the data to the blocks 32. The blocks 32 can also be selectively coupled to an output block 54 (e.g., an output port) for providing signals from the blocks 32 to an external device (e.g., another FSM lattice 30). The FSM lattice 30 can also include a programming interface 56 to load a program (e.g., an image) onto the FSM lattice 30. The image can program (e.g., set) the state of the SMEs 34, 36. That is, the image can configure the SMEs 34, 36 to react in a certain way to a given input at the input block 52. For example, a SME 34, 36 can be set to output a high signal when the character 'a' is received at the input block 52.

[0030] In an example, the input block 52, the output block 54, and/or the programming interface 56 can be implemented as registers such that writing to or reading from the registers provides data to or from the respective elements. Accordingly, bits from the image stored in the registers corresponding to the programming interface 56 can be loaded on the SMEs 34, 36. Although FIG.
2 illustrates a certain number of conductors (e.g., wire, trace) between a block 32, input block 52, output block 54, and an inter-block switching element 40, it should be understood that in other examples, fewer or more conductors may be used.

[0031] FIG. 3 illustrates an example of a block 32. A block 32 can include a plurality of rows 38 that can be selectively coupled together with programmable intra-block switching elements 42. Additionally, a row 38 can be selectively coupled to another row 38 within another block 32 with the inter-block switching elements 40. A row 38 includes a plurality of SMEs 34, 36 organized into pairs of elements that are referred to herein as groups of two (GOTs) 60. In an example, a block 32 comprises sixteen (16) rows 38.

[0032] FIG. 4 illustrates an example of a row 38. A GOT 60 can be selectively coupled to other GOTs 60 and any other elements (e.g., a special purpose element 58) within the row 38 by programmable intra-row switching elements 44. A GOT 60 can also be coupled to other GOTs 60 in other rows 38 with the intra-block switching element 42, or other GOTs 60 in other blocks 32 with an inter-block switching element 40. In an example, a GOT 60 has a first and second input 62, 64, and an output 66. The first input 62 is coupled to a first SME 34 of the GOT 60 and the second input 64 is coupled to a second SME 36 of the GOT 60, as will be further illustrated with reference to FIG. 5.

[0033] In an example, the row 38 includes a first and second plurality of row interconnection conductors 68, 70. In an example, an input 62, 64 of a GOT 60 can be coupled to one or more row interconnection conductors 68, 70, and an output 66 can be coupled to one row interconnection conductor 68, 70. In an example, a first plurality of the row interconnection conductors 68 can be coupled to each SME 34, 36 of each GOT 60 within the row 38. A second plurality of the row interconnection conductors 70 can be coupled to only one SME 34, 36 of each GOT 60 within the row 38, but cannot be coupled to the other SME 34, 36 of the GOT 60. In an example, a first half of the second plurality of row interconnection conductors 70 can couple to a first half of the SMEs 34, 36 within a row 38 (one SME 34 from each GOT 60) and a second half of the second plurality of row interconnection conductors 70 can couple to a second half of the SMEs 34, 36 within a row 38 (the other SME 34, 36 from each GOT 60), as will be better illustrated with respect to FIG. 5. The limited connectivity between the second plurality of row interconnection conductors 70 and the SMEs 34, 36 is referred to herein as "parity". In an example, the row 38 can also include a special purpose element 58 such as a counter, a programmable Boolean logic element, look-up table, RAM, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a programmable processor (e.g., a microprocessor), or other element for performing a special purpose function.

[0034] In an example, the special purpose element 58 comprises a counter (also referred to herein as counter 58). In an example, the counter 58 comprises a 12-bit programmable down counter. The 12-bit programmable counter 58 has a counting input, a reset input, and zero-count output. The counting input, when asserted, decrements the value of the counter 58 by one. The reset input, when asserted, causes the counter 58 to load an initial value from an associated register. For the 12-bit counter 58, up to a 12-bit number can be loaded in as the initial value.
When the value of the counter 58 is decremented to zero (0), the zero-count output is asserted. The counter 58 also has at least two modes, pulse and hold. When the counter 58 is set to pulse mode, the zero-count output is asserted during the clock cycle when the counter 58 decrements to zero, and at the next clock cycle the zero-count output is no longer asserted. When the counter 58 is set to hold mode, the zero-count output is asserted during the clock cycle when the counter 58 decrements to zero, and stays asserted until the counter 58 is reset by the reset input being asserted.

[0035] In another example, the special purpose element 58 comprises Boolean logic. In some examples, this Boolean logic can be used to extract information from terminal state SMEs (corresponding to terminal nodes of a FSM, as discussed later herein) in FSM lattice 30. The information extracted can be used to transfer state information to other FSM lattices 30 and/or to transfer programming information used to reprogram FSM lattice 30, or to reprogram another FSM lattice 30.

[0036] FIG. 5 illustrates an example of a GOT 60. The GOT 60 includes a first SME 34 and a second SME 36 having inputs 62, 64 and having their outputs 72, 74 coupled to an OR gate 76 and a 3-to-1 multiplexer 78. The 3-to-1 multiplexer 78 can be set to couple the output 66 of the GOT 60 to either the first SME 34, the second SME 36, or the OR gate 76. The OR gate 76 can be used to couple together both outputs 72, 74 to form the common output 66 of the GOT 60. In an example, the first and second SME 34, 36 exhibit parity, as discussed above, where the input 62 of the first SME 34 can be coupled to some of the row interconnect conductors 68 and the input 64 of the second SME 36 can be coupled to other row interconnect conductors 70. In an example, the two SMEs 34, 36 within a GOT 60 can be cascaded and/or looped back to themselves by setting either or both of switching elements 79. The SMEs 34, 36 can be cascaded by coupling the output 72, 74 of the SMEs 34, 36 to the input 62, 64 of the other SME 34, 36. The SMEs 34, 36 can be looped back to themselves by coupling the output 72, 74 to their own input 62, 64. Accordingly, the output 72 of the first SME 34 can be coupled to neither, one, or both of the input 62 of the first SME 34 and the input 64 of the second SME 36.

[0037] In an example, a state machine element 34, 36 comprises a plurality of memory cells 80, such as those often used in dynamic random access memory (DRAM), coupled in parallel to a detect line 82. One such memory cell 80 comprises a memory cell that can be set to a data state, such as one that corresponds to either a high or a low value (e.g., a 1 or 0). The output of the memory cell 80 is coupled to the detect line 82 and the input to the memory cell 80 receives signals based on data on the data stream line 84. In an example, an input on the data stream line 84 is decoded to select one of the memory cells 80. The selected memory cell 80 provides its stored data state as an output onto the detect line 82. For example, the data received at the input block 52 can be provided to a decoder (not shown) and the decoder can select one of the data stream lines 84. In an example, the decoder can convert an 8-bit ASCII character to the corresponding 1 of 256 data stream lines 84.

[0038] A memory cell 80, therefore, outputs a high signal to the detect line 82 when the memory cell 80 is set to a high value and the data on the data stream line 84 corresponds to the memory cell 80.
When the data on the data stream line 84 corresponds to the memory cell 80 and the memory cell 80 is set to a low value, the memory cell 80 outputs a low signal to the detect line 82. The outputs from the memory cells 80 on the detect line 82 are sensed by a detection cell 86.

[0039] In an example, the signal on an input line 62, 64 sets the respective detection cell 86 to either an active or inactive state. When set to the inactive state, the detection cell 86 outputs a low signal on the respective output 72, 74 regardless of the signal on the respective detect line 82. When set to an active state, the detection cell 86 outputs a high signal on the respective output line 72, 74 when a high signal is detected from one of the memory cells 80 of the respective SME 34, 36. When in the active state, the detection cell 86 outputs a low signal on the respective output line 72, 74 when the signals from all of the memory cells 80 of the respective SME 34, 36 are low.

[0040] In an example, an SME 34, 36 includes 256 memory cells 80 and each memory cell 80 is coupled to a different data stream line 84. Thus, an SME 34, 36 can be programmed to output a high signal when a selected one or more of the data stream lines 84 have a high signal thereon. For example, the SME 34 can have a first memory cell 80 (e.g., bit 0) set high and all other memory cells 80 (e.g., bits 1-255) set low. When the respective detection cell 86 is in the active state, the SME 34 outputs a high signal on the output 72 when the data stream line 84 corresponding to bit 0 has a high signal thereon. In other examples, the SME 34 can be set to output a high signal when one of multiple data stream lines 84 have a high signal thereon by setting the appropriate memory cells 80 to a high value.

[0041] In an example, a memory cell 80 can be set to a high or low value by reading bits from an associated register. Accordingly, the SMEs 34 can be programmed by storing an image created by the compiler 20 into the registers and loading the bits in the registers into associated memory cells 80. In an example, the image created by the compiler 20 includes a binary image of high and low (e.g., 1 and 0) bits. The image can program the FSM lattice 30 to operate as a FSM by cascading the SMEs 34, 36. For example, a first SME 34 can be set to an active state by setting the detection cell 86 to the active state. The first SME 34 can be set to output a high signal when the data stream line 84 corresponding to bit 0 has a high signal thereon. The second SME 36 can be initially set to an inactive state, but can be set to, when active, output a high signal when the data stream line 84 corresponding to bit 1 has a high signal thereon. The first SME 34 and the second SME 36 can be cascaded by setting the output 72 of the first SME 34 to couple to the input 64 of the second SME 36. Thus, when a high signal is sensed on the data stream line 84 corresponding to bit 0, the first SME 34 outputs a high signal on the output 72 and sets the detection cell 86 of the second SME 36 to an active state. When a high signal is sensed on the data stream line 84 corresponding to bit 1, the second SME 36 outputs a high signal on the output 74 to activate another SME 36 or for output from the FSM lattice 30.

[0042] In an example, a single FSM lattice 30 is implemented on a single physical device; however, in other examples, two or more FSM lattices 30 can be implemented on a single physical device (e.g., physical chip).
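Before turning to multi-lattice devices, the memory cell, detection cell, and cascading behavior described in the preceding paragraphs can be summarized in a short software sketch. The class below is an explanatory stand-in, not the hardware, and the use of the characters 'a' and 'b' in place of bits 0 and 1 is an illustrative assumption.

```python
class SME:
    def __init__(self, match_byte: int, active: bool = False):
        # One memory cell per data stream line; only the cell for the
        # byte value this SME recognizes is set high.
        self.cells = [1 if b == match_byte else 0 for b in range(256)]
        self.active = active                 # state of the detection cell

    def analyze(self, byte: int) -> bool:
        # The input byte selects one memory cell; its stored value is
        # driven onto the detect line and gated by the detection cell.
        return bool(self.active and self.cells[byte])

# Cascade two SMEs to recognize the two-byte sequence 'ab', mirroring
# the bit-0-then-bit-1 example above.
first = SME(ord("a"), active=True)           # initially active
second = SME(ord("b"), active=False)         # activated only by `first`

for byte in b"xab":
    out_72 = first.analyze(byte)             # output of the first SME
    out_74 = second.analyze(byte)            # output of the second SME
    if out_74:
        print("sequence 'ab' recognized")
    # Cascading: a high output from the first SME activates the second
    # SME's detection cell for the next data cycle; otherwise the second
    # SME becomes inactive again.
    second.active = out_72
```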
In an example, each FSM lattice 30 can include a distinct data input block 52, a distinct output block 54, a distinct programming interface 56, and a distinct set of programmable elements. Moreover, each set of programmable elements can react (e.g., output a high or low signal) to data at their corresponding data input block 52. For example, a first set of programmable elements corresponding to a first FSM lattice 30 can react to the data at a first data input block 52 corresponding to the first FSM lattice 30. A second set of programmable elements corresponding to a second FSM lattice 30 can react to a second data input block 52 corresponding to the second FSM lattice 30. Accordingly, each FSM lattice 30 includes a set of programmable elements, wherein different sets of programmable elements can react to different input data. Similarly, each FSM lattice 30, and each corresponding set of programmable elements, can provide a distinct output. In some examples, an output block 54 from a first FSM lattice 30 can be coupled to an input block 52 of a second FSM lattice 30, such that input data for the second FSM lattice 30 can include the output data from the first FSM lattice 30 in a hierarchical arrangement of a series of FSM lattices 30.

[0043] In an example, an image for loading onto the FSM lattice 30 comprises a plurality of bits of information for configuring the programmable elements, the programmable switching elements, and the special purpose elements within the FSM lattice 30. In an example, the image can be loaded onto the FSM lattice 30 to program the FSM lattice 30 to provide a desired output based on certain inputs. The output block 54 can provide outputs from the FSM lattice 30 based on the reaction of the programmable elements to data at the data input block 52. An output from the output block 54 can include a single bit indicating a match of a given pattern, a word comprising a plurality of bits indicating matches and non-matches to a plurality of patterns, and a state vector corresponding to the state of all or certain programmable elements at a given moment. As described, a number of FSM lattices 30 may be included in a state machine engine, such as state machine engine 14, to perform data analysis, such as pattern-recognition (e.g., speech recognition, image recognition, etc.), signal processing, imaging, computer vision, cryptography, and others.

[0044] FIG. 6 illustrates an example model of a finite state machine (FSM) that can be implemented by the FSM lattice 30. The FSM lattice 30 can be configured (e.g., programmed) as a physical implementation of a FSM. A FSM can be represented as a diagram 90 (e.g., directed graph, undirected graph, pseudograph), which contains one or more root nodes 92. In addition to the root nodes 92, the FSM can be made up of several standard nodes 94 and terminal nodes 96 that are connected to the root nodes 92 and other standard nodes 94 through one or more edges 98. A node 92, 94, 96 corresponds to a state in the FSM. The edges 98 correspond to the transitions between the states.

[0045] Each of the nodes 92, 94, 96 can be in either an active or an inactive state. When in the inactive state, a node 92, 94, 96 does not react (e.g., respond) to input data. When in an active state, a node 92, 94, 96 can react to input data.
An upstream node 92, 94 can react to the input data by activating a node 94, 96 that is downstream from the node when the input data matches criteria specified by an edge 98 between the upstream node 92, 94 and the downstream node 94, 96. For example, a first node 94 that specifies the character 'b' will activate a second node 94 connected to the first node 94 by an edge 98 when the first node 94 is active and the character 'b' is received as input data. As used herein, "upstream" refers to a relationship between one or more nodes, where a first node that is upstream of one or more other nodes (or upstream of itself in the case of a loop or feedback configuration) refers to the situation in which the first node can activate the one or more other nodes (or can activate itself in the case of a loop). Similarly, "downstream" refers to a relationship where a first node that is downstream of one or more other nodes (or downstream of itself in the case of a loop) can be activated by the one or more other nodes (or can be activated by itself in the case of a loop). Accordingly, the terms "upstream" and "downstream" are used herein to refer to relationships between one or more nodes, but these terms do not preclude the use of loops or other non-linear paths among the nodes.

[0046] In the diagram 90, the root node 92 can be initially activated and can activate downstream nodes 94 when the input data matches an edge 98 from the root node 92. Nodes 94 can activate nodes 96 when the input data matches an edge 98 from the node 94. Nodes 94, 96 throughout the diagram 90 can be activated in this manner as the input data is received. A terminal node 96 corresponds to a match of a sequence of interest by the input data. Accordingly, activation of a terminal node 96 indicates that a sequence of interest has been received as the input data. In the context of the FSM lattice 30 implementing a pattern recognition function, arriving at a terminal node 96 can indicate that a specific pattern of interest has been detected in the input data.

[0047] In an example, each root node 92, standard node 94, and terminal node 96 can correspond to a programmable element in the FSM lattice 30. Each edge 98 can correspond to connections between the programmable elements. Thus, a standard node 94 that transitions to (e.g., has an edge 98 connecting to) another standard node 94 or a terminal node 96 corresponds to a programmable element that transitions to (e.g., provides an output to) another programmable element. In some examples, the root node 92 does not have a corresponding programmable element.

[0048] When the FSM lattice 30 is programmed, each of the programmable elements can also be in either an active or inactive state. A given programmable element, when inactive, does not react to the input data at a corresponding data input block 52. An active programmable element can react to the input data at the data input block 52, and can activate a downstream programmable element when the input data matches the setting of the programmable element. When a programmable element corresponds to a terminal node 96, the programmable element can be coupled to the output block 54 to provide an indication of a match to an external device.
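The activation semantics of the diagram 90 admit a compact software illustration. The sketch below is a simplified model only: the dictionary encoding of nodes and edges, and the assumption that the root node is eligible for re-activation on every cycle, are made for illustration.

```python
# Illustrative model: active nodes activate downstream nodes whose
# connecting edge matches the input symbol; activation of a terminal
# node signals that a sequence of interest has been received.

# Node -> list of (edge criterion, downstream node). This tiny diagram
# recognizes the sequence "ab".
edges = {
    "root":     [("a", "standard")],
    "standard": [("b", "terminal")],
    "terminal": [],
}
terminal_nodes = {"terminal"}

active = {"root"}
for symbol in "xaab":
    next_active = {"root"}          # assumption: root stays eligible
    for node in active:
        for criterion, downstream in edges[node]:
            if symbol == criterion:
                next_active.add(downstream)
    active = next_active
    for node in active & terminal_nodes:
        print(f"sequence of interest detected at symbol {symbol!r}")
```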
[0049] An image loaded onto the FSM lattice 30 via the programming interface 56 can configure the programmable elements and special purpose elements, as well as the connections between the programmable elements and special purpose elements, such that a desired FSM is implemented through the sequential activation of nodes based on reactions to the data at the data input block 52. In an example, a programmable element remains active for a single data cycle (e.g., a single character, a set of characters, a single clock cycle) and then becomes inactive unless re-activated by an upstream programmable element.

[0050] A terminal node 96 can be considered to store a compressed history of past events. For example, the one or more patterns of input data required to reach a terminal node 96 can be represented by the activation of that terminal node 96. In an example, the output provided by a terminal node 96 is binary, that is, the output indicates whether the pattern of interest has been matched or not. The ratio of terminal nodes 96 to standard nodes 94 in a diagram 90 may be quite small. In other words, although there may be a high complexity in the FSM, the output of the FSM may be small by comparison.

[0051] In an example, the output of the FSM lattice 30 can comprise a state vector. The state vector comprises the state (e.g., activated or not activated) of programmable elements of the FSM lattice 30. In an example, the state vector includes the states for the programmable elements corresponding to terminal nodes 96. Thus, the output can include a collection of the indications provided by all terminal nodes 96 of a diagram 90. The state vector can be represented as a word, where the binary indication provided by each terminal node 96 comprises one bit of the word. This encoding of the terminal nodes 96 can provide an effective indication of the detection state (e.g., whether and what sequences of interest have been detected) for the FSM lattice 30. In another example, the state vector can include the state of all or a subset of the programmable elements whether or not the programmable elements correspond to a terminal node 96.

[0052] As mentioned above, the FSM lattice 30 can be programmed to implement a pattern recognition function. For example, the FSM lattice 30 can be configured to recognize one or more data sequences (e.g., signatures, patterns) in the input data. When a data sequence of interest is recognized by the FSM lattice 30, an indication of that recognition can be provided at the output block 54. In an example, the pattern recognition can recognize a string of symbols (e.g., ASCII characters) to, for example, identify malware or other information in network data.

[0053] FIG. 7 illustrates an example of hierarchical structure 100, wherein two levels of FSM lattices 30 are coupled in series and used to analyze data. Specifically, in the illustrated embodiment, the hierarchical structure 100 includes a first FSM lattice 30A and a second FSM lattice 30B arranged in series. Each FSM lattice 30 includes a respective data input block 52 to receive data input, a programming interface block 56 to receive programming signals, and an output block 54.

[0054] The first FSM lattice 30A is configured to receive input data, for example, raw data at a data input block. The first FSM lattice 30A reacts to the input data as described above and provides an output at an output block. The output from the first FSM lattice 30A is sent to a data input block of the second FSM lattice 30B.
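This handoff can be pictured with a minimal software analogy (shown below) in which the second level analyzes only the indications emitted by the first level. The generator functions and the "two consecutive indications" criterion are hypothetical.

```python
# Minimal analogy for the two-level hierarchy: level two consumes the
# output word of level one, not the raw data stream. Purely illustrative.

def first_lattice(raw_stream):
    # Level 1: match raw patterns; emit one indication per input chunk.
    for chunk in raw_stream:
        yield "sig_a" if b"attack" in chunk else "none"

def second_lattice(indications):
    # Level 2: recognize patterns in level 1's *output*, here two
    # consecutive "sig_a" indications.
    previous = "none"
    for indication in indications:
        if previous == "sig_a" and indication == "sig_a":
            yield "compound match"
        previous = indication

raw = [b"...attack...", b"...attack...", b"quiet"]
print(list(second_lattice(first_lattice(raw))))   # ['compound match']
```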
The second FSM lattice 30B can then react based on the output provided by the first FSM lattice 30A and provide a corresponding output signal 102 of the hierarchical structure 100. This hierarchical coupling of two FSM lattices 30A and 30B in series provides a means to transfer information regarding past events in a compressed word from a first FSM lattice 30A to a second FSM lattice 30B. The information transferred can effectively be a summary of complex events (e.g., sequences of interest) that were recorded by the first FSM lattice 30A.

[0055] The two-level hierarchy 100 of FSM lattices 30A, 30B shown in FIG. 7 allows two independent programs to operate based on the same data stream. The two-stage hierarchy can be similar to visual recognition in a biological brain, which is modeled as different regions. Under this model, the regions are effectively different pattern recognition engines, each performing a similar computational function (pattern matching) but using different programs (signatures). By connecting multiple FSM lattices 30A, 30B together, increased knowledge about the data stream input may be obtained.

[0056] The first level of the hierarchy (implemented by the first FSM lattice 30A) can, for example, perform processing directly on a raw data stream. That is, a raw data stream can be received at an input block 52 of the first FSM lattice 30A and the programmable elements of the first FSM lattice 30A can react to the raw data stream. The second level (implemented by the second FSM lattice 30B) of the hierarchy can process the output from the first level. That is, the second FSM lattice 30B receives the output from an output block 54 of the first FSM lattice 30A at an input block 52 of the second FSM lattice 30B and the programmable elements of the second FSM lattice 30B can react to the output of the first FSM lattice 30A. Accordingly, in this example, the second FSM lattice 30B does not receive the raw data stream as an input, but rather receives the indications of patterns of interest that are matched by the raw data stream as determined by the first FSM lattice 30A. The second FSM lattice 30B can implement a FSM that recognizes patterns in the output data stream from the first FSM lattice 30A.

[0057] FIG. 8 illustrates an example of a method 110 for a compiler to convert source code into an image configured to program a FSM lattice, such as lattice 30, to implement a FSM. Method 110 includes parsing the source code into a syntax tree (block 112), converting the syntax tree into an automaton (block 114), optimizing the automaton (block 116), converting the automaton into a netlist (block 118), placing the netlist on hardware (block 120), routing the netlist (block 122), and publishing the resulting image (block 124).

[0058] In an example, the compiler 20 includes an application programming interface (API) that allows software developers to create images for implementing FSMs on the FSM lattice 30. The compiler 20 provides methods to convert an input set of regular expressions in the source code into an image that is configured to program the FSM lattice 30. The compiler 20 can be implemented by instructions for a computer having a von Neumann architecture. These instructions can cause a processor 12 on the computer to implement the functions of the compiler 20.
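The stages of method 110 can be outlined as a skeleton, with one placeholder function per block of FIG. 8. The bodies are intentionally stubbed; this is an assumed outline of the compiler 20's flow, not its actual code.

```python
# Hypothetical skeleton of method 110; each stub corresponds to one
# block of FIG. 8 and is a placeholder, not the compiler's implementation.

def parse(source_code): ...            # block 112: source -> syntax tree
def to_automaton(syntax_tree): ...     # block 114: syntax tree -> automaton
def optimize(automaton): ...           # block 116: combine redundant states
def to_netlist(automaton): ...         # block 118: states -> hardware elements
def place(netlist): ...                # block 120: select specific elements
def route(placed_netlist): ...         # block 122: set switching elements
def publish(routed_netlist): ...       # block 124: emit the binary image

def compile_image(source_code):
    syntax_tree = parse(source_code)
    automaton = optimize(to_automaton(syntax_tree))
    return publish(route(place(to_netlist(automaton))))
```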
For example, the instructions, when executed by the processor 12, can cause the processor 12 to perform actions as described in blocks 112, 114, 116, 118, 120, 122, and 124 on source code that is accessible to the processor 12.

[0059] In an example, the source code describes search strings for identifying patterns of symbols within a group of symbols. To describe the search strings, the source code can include a plurality of regular expressions (regexes). A regex can be a string for describing a symbol search pattern. Regexes are widely used in various computer domains, such as programming languages, text editors, network security, and others. In an example, the regular expressions supported by the compiler include criteria for the analysis of unstructured data. Unstructured data can include data that is free form and has no indexing applied to words within the data. Words can include any combination of bytes, printable and non-printable, within the data. In an example, the compiler can support multiple different source code languages for implementing regexes, including Perl (e.g., Perl compatible regular expressions (PCRE)), PHP, Java, and .NET languages.

[0060] At block 112, the compiler 20 can parse the source code to form an arrangement of relationally connected operators, where different types of operators correspond to different functions implemented by the source code (e.g., different functions implemented by regexes in the source code). Parsing source code can create a generic representation of the source code. In an example, the generic representation comprises an encoded representation of the regexes in the source code in the form of a tree graph known as a syntax tree. The examples described herein refer to the arrangement as a syntax tree (also known as an "abstract syntax tree"); in other examples, however, a concrete syntax tree or other arrangement can be used.

[0061] Since, as mentioned above, the compiler 20 can support multiple languages of source code, parsing converts the source code, regardless of the language, into a non-language specific representation, e.g., a syntax tree. Thus, further processing (blocks 114, 116, 118, 120) by the compiler 20 can work from a common input structure regardless of the language of the source code.

[0062] As noted above, the syntax tree includes a plurality of operators that are relationally connected. A syntax tree can include multiple different types of operators. That is, different operators can correspond to different functions implemented by the regexes in the source code.

[0063] At block 114, the syntax tree is converted into an automaton. An automaton comprises a software model of a FSM and can accordingly be classified as deterministic or non-deterministic. A deterministic automaton has a single path of execution at a given time, while a non-deterministic automaton has multiple concurrent paths of execution. The automaton comprises a plurality of states. In order to convert the syntax tree into an automaton, the operators and relationships between the operators in the syntax tree are converted into states with transitions between the states. In an example, the automaton can be converted based partly on the hardware of the FSM lattice 30.

[0064] In an example, input symbols for the automaton include the symbols of the alphabet, the numerals 0-9, and other printable characters. In an example, the input symbols are represented by the byte values 0 through 255 inclusive.
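For a single operator type, the conversion of block 114 can be made concrete. The sketch below handles only a concatenation of literal symbols; real regexes require many more operator types, and the state and transition encoding here is an assumption for illustration.

```python
# Illustrative conversion of one syntax-tree operator (concatenation of
# literals) into automaton states and transitions. Not the compiler 20.

def concat_to_automaton(literals):
    """('a','b','c') -> states 0..3 with start state 0 and final state
    len(literals); transitions keyed by (state, symbol)."""
    transitions = {}
    for state, symbol in enumerate(literals):
        transitions[(state, symbol)] = state + 1
    return {"start": 0, "final": len(literals), "transitions": transitions}

def accepts(automaton, string):
    """A string is in the recognized language if it traces a path from
    the start state to the final state."""
    state = automaton["start"]
    for symbol in string:
        state = automaton["transitions"].get((state, symbol))
        if state is None:
            return False
    return state == automaton["final"]

automaton = concat_to_automaton(("a", "b", "c"))
print(accepts(automaton, "abc"))   # True
print(accepts(automaton, "abx"))   # False
```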
In an example, an automaton can be represented as a directed graph where the nodes of the graph correspond to the set of states. In an example, a transition from state p to state q on an input symbol a, i.e., δ(p, a), is shown by a directed connection from node p to node q. In an example, a reversal of an automaton produces a new automaton where each transition p→q on some symbol a is reversed to q→p on the same symbol. In a reversal, the start state becomes a final state and the final states become start states. In an example, the language recognized (e.g., matched) by an automaton is the set of all possible character strings which, when input sequentially into the automaton, will reach a final state. Each string in the language recognized by the automaton traces a path from the start state to one or more final states.

[0065] At block 116, after the automaton is constructed, the automaton is optimized to, among other things, reduce its complexity and size. The automaton can be optimized by combining redundant states.

[0066] At block 118, the optimized automaton is converted into a netlist. Converting the automaton into a netlist maps each state of the automaton to a hardware element (e.g., SMEs 34, 36, other elements) on the FSM lattice 30, and determines the connections between the hardware elements.

[0067] At block 120, the netlist is placed to select a specific hardware element of the target device (e.g., SMEs 34, 36, special purpose elements 58) corresponding to each node of the netlist. In an example, placing selects each specific hardware element based on general input and output constraints of the FSM lattice 30.

[0068] At block 122, the placed netlist is routed to determine the settings for the programmable switching elements (e.g., inter-block switching elements 40, intra-block switching elements 42, and intra-row switching elements 44) in order to couple the selected hardware elements together to achieve the connections described by the netlist. In an example, the settings for the programmable switching elements are determined by determining the specific conductors of the FSM lattice 30 that will be used to connect the selected hardware elements, and the corresponding settings for the programmable switching elements. Routing can take into account more specific limitations of the connections between the hardware elements than placement at block 120. Accordingly, routing may adjust the location of some of the hardware elements as determined by the global placement in order to make appropriate connections given the actual limitations of the conductors on the FSM lattice 30.

[0069] Once the netlist is placed and routed, the placed and routed netlist can be converted into a plurality of bits for programming of a FSM lattice 30. The plurality of bits are referred to herein as an image.

[0070] At block 124, an image is published by the compiler 20. The image comprises a plurality of bits for programming specific hardware elements of the FSM lattice 30. In embodiments where the image comprises a plurality of bits (e.g., 0 and 1), the image can be referred to as a binary image. The bits can be loaded onto the FSM lattice 30 to program the state of SMEs 34, 36, the special purpose elements 58, and the programmable switching elements such that the programmed FSM lattice 30 implements a FSM having the functionality described by the source code. Placement (block 120) and routing (block 122) can map specific hardware elements at specific locations in the FSM lattice 30 to specific states in the automaton.
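As a toy illustration of that mapping, blocks 118 and 120 can be pictured as building a table from automaton states to hardware element instances and then to lattice coordinates. The data shapes and the coordinate scheme below are hypothetical, not the compiler 20's actual structures; in the actual flow, routing (block 122) would then choose conductors and switching element settings to realize the listed connections.

```python
# Hypothetical sketch of blocks 118-120: states map to element instances
# (the netlist), transitions map to connections, and placement assigns
# each instance an assumed (block, row, GOT) coordinate.

automaton = {
    "states": ["s0", "s1", "s2"],
    "transitions": [("s0", "a", "s1"), ("s1", "b", "s2")],
}

netlist = {
    "instances": {s: f"SME_{i}" for i, s in enumerate(automaton["states"])},
    "connections": [(src, dst) for (src, _sym, dst) in automaton["transitions"]],
}

placement = {inst: ("block0", f"row{i % 16}", f"got{i % 8}")
             for i, inst in enumerate(netlist["instances"].values())}

print(netlist["connections"])   # [('s0', 's1'), ('s1', 's2')]
print(placement["SME_0"])       # ('block0', 'row0', 'got0')
```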
Accordingly, the bits in the image can program the specific hardware elements to implement the desired function(s). In an example, the image can be published by saving the machine code to a computer readable medium. In another example, the image can be published by displaying the image on a display device. In still another example, the image can be published by sending the image to another device, such as a programming device for loading the image onto the FSM lattice 30. In yet another example, the image can be published by loading the image onto a FSM lattice (e.g., the FSM lattice 30).

[0071] In an example, an image can be loaded onto the FSM lattice 30 by either directly loading the bit values from the image to the SMEs 34, 36 and other hardware elements or by loading the image into one or more registers and then writing the bit values from the registers to the SMEs 34, 36 and other hardware elements. In an example, the hardware elements (e.g., SMEs 34, 36, special purpose elements 58, programmable switching elements 40, 42, 44) of the FSM lattice 30 are memory mapped such that a programming device and/or computer can load the image onto the FSM lattice 30 by writing the image to one or more memory addresses.

[0072] Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer readable instructions for performing various methods. The code may form portions of computer program products. Further, the code may be tangibly stored on one or more volatile or non-volatile computer-readable media during execution or at other times. These computer-readable media may include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.

[0073] Referring now to FIG. 9, an embodiment of the state machine engine 14 is illustrated. As previously described, the state machine engine 14 is configured to receive data from a source, such as the memory 16, over a data bus. In the illustrated embodiment, data may be sent to the state machine engine 14 through a bus interface, such as a DDR3 bus interface 130. The DDR3 bus interface 130 may be capable of exchanging data at a rate greater than or equal to 1 GByte/sec. As will be appreciated, depending on the source of the data to be analyzed, the bus interface 130 may be any suitable bus interface for exchanging data to and from a data source to the state machine engine 14, such as a NAND Flash interface, PCI interface, etc. As previously described, the state machine engine 14 includes one or more FSM lattices 30 configured to analyze data. Each FSM lattice 30 may be divided into two half-lattices. In the illustrated embodiment, each half-lattice may include 24K SMEs (e.g., SMEs 34, 36), such that the lattice 30 includes 48K SMEs. The lattice 30 may comprise any desirable number of SMEs, arranged as previously described with regard to FIGS. 2-5.
Further, while only one FSM lattice 30 is illustrated, the state machine engine 14 may include multiple FSM lattices 30, as previously described.

[0074] Data to be analyzed may be received at the bus interface 130 and transmitted to the FSM lattice 30 through a number of buffers and buffer interfaces. In the illustrated embodiment, the data path includes data buffers 132, process buffers 134, and an inter-rank (IR) bus and process buffer interface 136. The data buffers 132 are configured to receive and temporarily store data to be analyzed. In one embodiment, there are two data buffers 132 (data buffer A and data buffer B). Data may be stored in one of the two data buffers 132, while data is being emptied from the other data buffer 132, for analysis by the FSM lattice 30. In the illustrated embodiment, the data buffers 132 may be 32 KBytes each. The IR bus and process buffer interface 136 may facilitate the transfer of data to the process buffer 134. The IR bus and process buffer interface 136 ensures that data is processed by the FSM lattice 30 in order. The IR bus and process buffer interface 136 may coordinate the exchange of data, timing information, packing instructions, etc. such that data is received and analyzed in the correct order. Generally, the IR bus and process buffer interface 136 allows the analyzing of multiple data sets in parallel through logical ranks of FSM lattices 30.

[0075] In the illustrated embodiment, the state machine engine 14 also includes a de-compressor 138 and a compressor 140 to aid in the transfer of the large amounts of data through the state machine engine 14. The compressor 140 and de-compressor 138 work in conjunction such that data can be compressed to minimize the data transfer times. By compressing the data to be analyzed, the bus utilization time may be minimized. Based on information provided by the compiler 20, a mask may be provided to the state machine engine 14 to provide information on which state machines are likely to be unused. The compressor 140 and de-compressor 138 can also be configured to handle data of varying burst lengths. By padding compressed data and including an indicator as to when each compressed region ends, the compressor 140 may improve the overall processing speed through the state machine engine 14. The compressor 140 and de-compressor 138 may also be used to compress and decompress match results data after analysis by the FSM lattice 30.

[0076] As previously described, the output of the FSM lattice 30 can comprise a state vector. The state vector comprises the state (e.g., activated or not activated) of programmable elements of the FSM lattice 30. Each state vector may be temporarily stored in the state vector cache memory 142 for further hierarchical processing and analysis. That is, the state of each state machine may be stored, such that the final state may be used in further analysis, while freeing the state machines for reprogramming and/or further analysis of a new data set. Like a typical cache, the state vector cache memory 142 allows information (here, state vectors) to be stored for quick retrieval and use, here by the FSM lattice 30, for instance. Additional buffers, such as the state vector memory buffer, state vector intermediate input buffer 146, and state vector intermediate output buffer 148, may be utilized in conjunction with the state vector cache memory 142 to accommodate rapid analysis and storage of state vectors, while adhering to packet transmission protocol through the state machine engine 14.
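Returning to the data buffers 132 described above, their alternation can be sketched as a double-buffering loop: one buffer receives data while the other is emptied for analysis. In hardware these activities overlap in time; the sequential Python below, with hypothetical names, illustrates only the role swap.

```python
# Illustrative double-buffering sketch for data buffer A and data buffer
# B: fill one buffer while the other drains to the lattice for analysis.

def chunks(data: bytes, size: int):
    for i in range(0, len(data), size):
        yield data[i:i + size]

def analyze(buffer: bytes):
    print(f"analyzing {len(buffer)} bytes")   # stand-in for FSM lattice work

BUF_SIZE = 32 * 1024                          # 32 KBytes each, per the text
buffers = [bytearray(), bytearray()]          # data buffer A, data buffer B
fill = 0                                      # which buffer is being filled

for chunk in chunks(b"\x00" * (3 * BUF_SIZE), BUF_SIZE):
    buffers[fill][:] = chunk                  # receive data into one buffer,
    if buffers[1 - fill]:                     # while the other (once loaded)
        analyze(bytes(buffers[1 - fill]))     # is emptied for analysis
    fill = 1 - fill                           # swap roles for the next chunk

analyze(bytes(buffers[1 - fill]))             # drain the final buffer
```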
[0077] Once a result of interest is produced by the FSM lattice 30, match results may be stored in a match results memory 150. That is, a "match vector" indicating a match (e.g., detection of a pattern of interest) may be stored in the match results memory 150. The match result can then be sent to a match buffer 152 for transmission over the bus interface 130 to the processor 12, for example. As previously described, the match results may be compressed.

[0078] Additional registers and buffers may be provided in the state machine engine 14, as well. For instance, the state machine engine 14 may include control and status registers 154. In addition, restore and program buffers 156 may be provided for use in programming the FSM lattice 30 initially, or restoring the state of the machines in the FSM lattice 30 during analysis. Similarly, save and repair map buffers 158 may also be provided for storage of save and repair maps for setup and usage.

[0079] FIG. 10 illustrates a second example of a row 38 similar to that discussed above with respect to FIG. 4. The row 38 may include programmable intra-row switching elements 44 and row interconnection/interconnect conductors 68, 70 (which can also be referred to as "row routing lines", as described below).

[0080] Row 38 of FIG. 10 may include eight GOTs 60, a special purpose element 58, inputs 62, inputs 64, outputs 66, a match element 160, a plurality of row routing lines 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, and 192 (collectively referred to hereafter as "row routing lines 162-192"), a special purpose element routing line 194, and a plurality of junction points 196.

[0081] Furthermore, in addition to being coupled to the illustrated SMEs 34, 36 in FIG. 11, the local routing matrix 172 may be coupled to all pairs of SMEs 34, 36 for the GOTs 60 in a particular row 38. Accordingly, the local routing matrix 172 may include programmable intra-row switching elements 44 and row interconnection/interconnect conductors 68, 70 (which can also be referred to as "row routing lines", as described below).

[0082] The GOTs 60 and the special purpose element 58 illustrated in FIG. 10 are substantially similar to the GOTs 60 and the special purpose element 58 previously discussed with respect to FIG. 4. Accordingly, each GOT 60 receives an input 62, which may be a unified enable input, to operate as an enable signal for a detection cell 86 of a SME 34. Likewise, each GOT 60 also receives an input 64, which may also be a unified enable input, to operate as an enable signal for a detection cell 86 of a SME 36. These unified enable inputs 62, 64 may activate the detection cells 86 of the SMEs 34, 36 to output a respective result of an analysis performed by the respective SME, for example, a match in an analyzed data stream from a single SME 34, which may be utilized in conjunction with results from other SMEs 34, 36 to, for example, search for a pattern in a data stream. For example, unified enable input 62 and unified enable input 64 allow for selective activation of the SMEs 34, 36 so that results generated by each of the active SMEs 34, 36 may be utilized as part of an overall broader analysis of a data stream.

[0083] The result generated by an SME 34, 36 of a GOT 60 may be selectively provided from the GOT on output 66.
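As described above with respect to FIG. 5, that selection is made through the OR gate 76 and the 3-to-1 multiplexer 78; the possible selections are enumerated next. The minimal sketch below models the selection with an assumed string encoding for the multiplexer setting.

```python
# Sketch of GOT output selection: output 66 can carry no output, the
# first SME's output 72, the second SME's output 74, or their OR via
# gate 76. The selector strings are an assumed encoding, not hardware.

def got_output_66(out_72: bool, out_74: bool, select: str) -> bool:
    return {
        "none":   False,
        "sme_34": out_72,
        "sme_36": out_74,
        "or_76":  out_72 or out_74,
    }[select]

print(got_output_66(True, False, "or_76"))   # True
print(got_output_66(True, False, "sme_36"))  # False
```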
In one embodiment, the possible outputs of the GOT 60 may include no output, an output of the first SME 34, i.e., output 72, an output of the second SME 36, i.e., output 74, or the output of the first SME 34 or the output of the second SME 36, i.e., output 72 or output 74. Thus, a GOT 60 may be programmed to output a selected result from a GOT 60. This programming may be accomplished, for example, based on a loaded image performed during an initial programming stage of the FSM lattice 30. Results from the GOTs 60 may be provided to a match element 160, which may operate to output a selected result generated from the row 38 for a given data stream search or a portion of a data stream search.

[0084] Additionally, row 38 may include row routing lines 162-192 (which may also be referred to as row interconnection/interconnect conductors). In the present embodiment, there are sixteen row lines 162-192 that are selectively coupleable to eight GOTs 60 and to the special purpose element 58. However, it should be appreciated that fewer or more row routing lines may be utilized in conjunction with the row 38.

[0085] Each of the row routing lines 162-192 may be utilized to provide enable signals for any of the SMEs 34, 36 of one or more GOTs 60 along inputs 62, 64. Accordingly, through use of these row routing lines 162-192, any particular detection cell 86 for any particular SME (e.g., SME 34) may be activated. This may be accomplished by selectively coupling (e.g., in accordance with a loaded image) the row routing lines 162-192 to unified enable inputs 62, 64 of the SMEs 34, 36. Moreover, to provide further flexibility in providing enable signals to the SMEs 34, 36, the row routing lines 162-192 may be divided up amongst two SMEs 34, 36 of a given GOT 60. For example, row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to activate any of the SMEs 34, 36 in the row 38. For example, a GOT 60 may transmit an output 66 to the row routing line coupled thereto, for example, row routing line 162. This signal may be transmitted into the intra-block switch, where it may be routed, for example, on row routing line 164 to an additional GOT 60 in the row 38. Additionally, row routing lines 178, 182, 186, and 190 may activate SMEs 34 in row 38, for example, by receiving signals from intra-block switch 42, while row routing lines 180, 184, 188, and 192 may activate SMEs 36 in row 38 via, for example, signals received from the intra-block switch 42. In this manner, the overall number of row routing lines 162-192 may be reduced, while still allowing for overall flexibility and the ability to activate any detection cell 86 of any of the SMEs 34, 36 in a row 38.

[0086] As illustrated in FIG. 10, each of the row routing lines 162-192 includes a plurality of junction points 196. These junction points 196 may, for example, include the intra-row switching elements 44 of FIG. 4, since the junction points 196 may be utilized to selectively couple any GOT 60 to any other GOT 60, or any GOT 60 to any other element (e.g., a special purpose element 58) within the row 38 (or, for that matter, within another row and/or another block). However, these connections may be limited by available junction points 196. For example, each of row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to activate any of the SMEs 34, 36 in the row 38.
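The selective coupling at junction points can likewise be modeled briefly in software. The representation below (a list of closed (line, input) pairs, as if loaded from the image) is an assumption for the sketch; it shows only how a high signal on a coupled row routing line asserts a unified enable input.

```python
# Illustrative sketch: junction points act as programmable switches that
# couple row routing lines to unified enable inputs 62, 64. The data
# encoding below is hypothetical.

row_lines = {162: 1, 164: 0, 178: 1, 180: 0}       # signal on each line

# Closed junction points, per the loaded image: (line, enable input).
closed_junctions = [(162, "got0.input_62"), (178, "got3.input_64"),
                    (164, "got1.input_62")]

def unified_enables(row_lines, closed_junctions):
    """An enable input is asserted when any line coupled to it is high."""
    enables = {}
    for line, enable_input in closed_junctions:
        enables[enable_input] = enables.get(enable_input, 0) | row_lines[line]
    return enables

print(unified_enables(row_lines, closed_junctions))
# {'got0.input_62': 1, 'got3.input_64': 1, 'got1.input_62': 0}
```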
However, each of row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 is also selectively coupleable to the output of a respective different one of the GOTs 60. For example, an output from any one of the GOTs 60 may only be provided from that GOT 60 on a respective one of the row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 coupleable thereto. Thus, in one embodiment, because row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 are coupleable to the outputs 66 of the GOTs 60, the row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may provide (e.g., drive-out) signals to the intra-block switch 42. In contrast, in one embodiment, row routing lines 178, 180, 182, 184, 186, 188, 190, and 192 may receive (e.g., be driven by) signals from the intra-block switch 42 that may be received from, for example, other rows 38 or blocks 32.

[0087] In addition to row routing lines 162-192, the row 38 may include a special purpose element routing line 194 coupled to a special purpose element 58. Similar to row routing lines 162, 164, 166, 168, 170, 172, 174, and 176, the special purpose routing line 194 may provide (e.g., drive-out) signals to the intra-block switch 42. In one embodiment, the special purpose element routing line 194 may also be coupleable to the match element 160. For example, if the special purpose element 58 comprises a counter, an output of the counter may be provided along the special purpose routing line 194. Similarly, if the special purpose element 58 includes a Boolean logic element, such as a Boolean cell, an output of the Boolean logic element may be provided along the special purpose routing line 194. Through the use of these special purpose elements, repetitive searches (e.g., find an element ten times) or cascaded searches (e.g., find elements x, y, and z) may be simplified into a single output that can be provided along the special purpose routing line 194 to either or both of the intra-block switch 42 and the match element 160.

[0088] A more detailed illustration of the intra-block switch 42 and its operation is presented in FIG. 11. As illustrated, the intra-block switch 42 may receive the row routing lines 162-192 as well as the special purpose element routing line 194, and these lines may intersect various block routing lines 198, 200, 202, 204, 206, 208, 210, 212, 214, 216, 218, 220, 222, 224, 226, and 228 (collectively referred to hereafter as "block routing lines 198-228") at a plurality of junction points 230. These junction points 230 may, for example, be utilized to selectively couple row routing lines 162-192 to block routing lines 198-228. In one embodiment, each of row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may provide (e.g., drive-out, send, transmit, transfer, pass, etc.) signals to the intra-block switch 42, while row routing lines 178, 180, 182, 184, 186, 188, 190, and 192 may receive (e.g., drive-in) signals from the intra-block switch 42. Accordingly, row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to provide signals from the row 38 coupled to the intra-block switch 42 in FIG. 10 to adjacent rows 38, such as those illustrated in FIG. 3. Additionally or alternatively, row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to provide signals from the row 38 coupled to the intra-block switch 42 in FIG. 10 to other rows 38 in the block 32 and/or to the block routing buffer 48.
[0088] A more detailed illustration of the intra-block switch 42 and its operation is presented in FIG. 11. As illustrated, the intra-block switch 42 may receive the row routing lines 162-192 as well as the special purpose element routing line 194, and these lines may intersect various block routing lines 198, 200, 202, 204, 206, 208, 210, 212, 214, 216, 218, 220, 222, 224, 226, and 228 (collectively referred to hereafter as "block routing lines 198-228") at a plurality of junction points 230. These junction points 230 may, for example, be utilized to selectively couple the row routing lines 162-192 to the block routing lines 198-228. In one embodiment, each of row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may provide (e.g., drive-out, send, transmit, transfer, pass, etc.) signals to the intra-block switch 42, while row routing lines 178, 180, 182, 184, 186, 188, 190, and 192 may receive (e.g., drive-in) signals from the intra-block switch 42. Accordingly, row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to provide signals from the row 38 coupled to the intra-block switch 42 in FIG. 10 to adjacent rows 38, such as those illustrated in FIG. 3. Additionally or alternatively, row routing lines 162, 164, 166, 168, 170, 172, 174, and 176 may be utilized to provide signals from the row 38 coupled to the intra-block switch 42 in FIG. 10 to other rows 38 in the block 32 and/or to the block routing buffer 48. This may be accomplished by providing the signals generated in a given row 38 to one of the block routing lines 198-228 coupled thereto, since the block routing lines 198-228 are coupled to the various intra-block switches 42 and the block routing buffer 48 of FIG. 3. This may allow row 38 to provide any results generated therein to adjacent rows 38 or even other blocks 32.

[0089] Conversely, in one embodiment, each of the row routing lines 178, 180, 182, 184, 186, 188, 190, and 192 may receive (e.g., drive-in) signals from the intra-block switch 42. Accordingly, row routing lines 178, 180, 182, 184, 186, 188, 190, and 192 may be utilized to provide signals to the row 38 coupled to the intra-block switch 42 in FIG. 10 from adjacent rows 38, such as those illustrated in FIG. 3. Additionally or alternatively, row routing lines 178, 180, 182, 184, 186, 188, 190, and 192 may be utilized to provide signals to the row 38 coupled to the intra-block switch 42 in FIG. 10 from the block routing buffer 48. This may be accomplished by receiving signals generated in external blocks 32 or in adjacent rows 38 from one of the block routing lines 198-228 of FIG. 11, since the block routing lines 198-228 are coupled to the various intra-block switches 42 and the block routing buffer 48 of FIG. 3. This may allow row 38 to receive any results generated in adjacent rows 38 or even other blocks 32 along row routing lines 178, 180, 182, 184, 186, 188, 190, and 192. In this manner, the intra-block switch 42 may couple row 38 with adjacent rows 38 and other blocks 32.

[0090] As discussed above, results from any particular row 38 may be passed from one block 32 to another block 32. The block routing buffer 48 may facilitate this transfer of information between blocks. FIG. 12 illustrates one embodiment of the block routing buffer 48.

[0091] As illustrated in FIG. 12, block routing buffer 48 may include block routing ports 232, 234 as well as junction routing ports 236, 238. As illustrated, block routing port 232 may be coupled to a bi-directional path 240 so that signals may be provided to and received from a block 32 at block routing port 232. Additionally, block routing port 234 may be coupled to an output path 242, such that block routing port 234 may provide signals to a block 32. Thus, the block routing buffer 48 may provide one or two signals to a given block at a time (i.e., simultaneously) by utilizing either one or both of the block routing ports 232, 234.
[0092] Junction routing ports 236, 238 of the block routing buffer 48 may also allow for one or two signals to be provided at the same time (i.e., simultaneously). As illustrated, junction routing port 236 may be coupled to a bi-directional path 244 so that signals may be provided to and received from the conductors 46 (e.g., wires, traces, etc.) of an inter-block switching element 40. Additionally, junction routing port 238 may be coupled to an output path 246, such that junction routing port 238 may provide signals to the conductors 46 of an inter-block switching element 40. Thus, the block routing buffer 48 may provide one or two signals to a given set of conductors 46 at a time (i.e., simultaneously) by utilizing either one or both of the junction routing ports 236, 238. As may be appreciated, the signals provided to the conductors 46 of an inter-block switching element 40 may be coupled to another block routing buffer 48 so that signals may be provided from a first block 32, through a first block routing buffer 48, across the conductors 46, to an adjacent block routing buffer 48, and to an adjacent block 32, as illustrated in FIG. 2.

[0093] To accomplish this routing, the block routing buffer 48 may include a bi-directional drive element 248 and two uni-directional drive elements 250, 252. As illustrated, the bi-directional drive element 248 may provide and receive signals along the bi-directional paths 240 and 244, while the uni-directional drive elements 250, 252 provide signals along output paths 242, 246, respectively.

[0094] Moreover, at any given time, the block routing buffer 48 may be functioning to receive signals or to provide one or more signals. Accordingly, block routing buffer 48 includes control inputs 254, 256, 258, and 260. Control inputs 254, 256, 258, and 260 may allow the block routing buffer 48 to be programmably set to provide one or more signals or to receive signals. For example, control input 254 may receive and provide a control signal to the bi-directional drive element 248 to activate the bi-directional drive element 248 to receive a signal from junction routing port 236 and provide the received signal from block routing port 232 to a block 32. Similarly, control input 256 may receive and provide a control signal to the bi-directional drive element 248 to activate the bi-directional drive element 248 to receive a signal from block routing port 232 and provide the signal to an inter-block switching element 40 via junction routing port 236.

[0095] Additionally, control input 258 may receive and provide a control signal to the uni-directional drive element 250 to activate the uni-directional drive element 250 to receive a signal from junction routing port 236 and provide the received signal from block routing port 234 to a block 32. Similarly, control input 260 may receive and provide a control signal to the uni-directional drive element 252 to activate the uni-directional drive element 252 to receive a signal from block routing port 234 and provide the signal to an inter-block switching element 40 via junction routing port 238. Accordingly, the control inputs 254, 256, 258, and 260 may allow the block routing buffer 48 to simultaneously provide signals to a block 32 or to an inter-block switching element 40, thus at least doubling the overall speed of providing signals through the block routing buffer 48.
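Paragraphs [0093]-[0095] can be summarized behaviorally as four independent enables on the three drive elements. The struct, field names, and function below are hypothetical: a minimal sketch rather than the disclosed circuit.

    /* Hypothetical behavioral model of the block routing buffer 48
       (illustrative only). Each control input enables one direction
       of one drive element, per paragraphs [0094]-[0095]. */
    typedef struct {
        int ctl254;  /* element 248: junction port 236 -> block port 232 */
        int ctl256;  /* element 248: block port 232 -> junction port 236 */
        int ctl258;  /* element 250: junction port 236 -> block port 234 */
        int ctl260;  /* element 252: block port 234 -> junction port 238 */
    } BlockRoutingCtl;

    void block_routing_buffer(const BlockRoutingCtl *c,
                              int jct236_in, int blk232_in, int blk234_in,
                              int *blk232_out, int *jct236_out,
                              int *blk234_out, int *jct238_out)
    {
        if (c->ctl254) *blk232_out = jct236_in;  /* receive from conductors 46 */
        if (c->ctl256) *jct236_out = blk232_in;  /* drive out to conductors 46 */
        if (c->ctl258) *blk234_out = jct236_in;  /* second simultaneous receive */
        if (c->ctl260) *jct238_out = blk234_in;  /* second simultaneous drive-out */
    }

The isolation buffer 50 described next follows the same pattern, with two uni-directional drive elements 270, 272 gated by control inputs 274, 276.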
[0096] Signals provided to the conductors 46 of the inter-block switching element 40 by a block routing buffer 48 may not always be immediately provided to, for example, another block routing buffer 48. As illustrated in FIG. 2, some signals may pass through an isolation buffer 50 prior to, for example, being provided to another block routing buffer 48 in an adjacent inter-block switching element 40. FIG. 13 illustrates an embodiment of an isolation buffer 50 that may be utilized to provide signals from one inter-block switching element 40 to another inter-block switching element 40.

[0097] As illustrated in FIG. 13, the isolation buffer 50 includes a junction routing port 262 as well as an isolation routing port 264. As illustrated, junction routing port 262 may be coupled to a bi-directional path 266 so that signals may be provided to and received from the conductors 46 of the inter-block switching element 40 at junction routing port 262. Additionally, isolation routing port 264 may be coupled to a bi-directional path 268 so that signals may be provided to and received from another isolation buffer 50 prior to being provided to the conductors 46 of an adjacent inter-block switching element 40. As may be appreciated, the signals may be provided from the conductors 46 of an inter-block switching element 40 to the isolation buffer 50, to another isolation buffer 50 via isolation routing port 264, and to the conductors 46 of an adjacent inter-block switching element, as illustrated in FIG. 2.

[0098] To accomplish this routing, isolation buffer 50 may include two uni-directional drive elements 270, 272. As illustrated, the uni-directional drive elements 270, 272 provide signals along the paths 266, 268, respectively. Moreover, at any given time, the isolation buffer 50 may be functioning to receive signals or to provide signals. Accordingly, isolation buffer 50 includes control inputs 274, 276. Control inputs 274, 276 may allow the isolation buffer 50 to be programmably set to provide signals or receive signals. For example, control input 274 may receive and provide a control signal to the uni-directional drive element 270 to activate the uni-directional drive element 270 to receive a signal from junction routing port 262 and provide the received signal from isolation routing port 264 to an adjacent isolation buffer 50. Similarly, control input 276 may receive and provide a control signal to the uni-directional drive element 272 to activate the uni-directional drive element 272 to receive a signal from an adjacent isolation buffer 50 and provide the signal to an inter-block switching element 40 via junction routing port 262. Additionally, in at least one embodiment, the isolation buffer 50 may operate as an amplifier to amplify the signals provided from isolation routing port 264 to an adjacent isolation buffer 50 so as to prevent, for example, signal degradation as signals are provided between adjacent isolation buffers 50.

[0099] While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims. |
In described examples, an integrated circuit (100) includes an isolation capacitor (116), which includes a silicon dioxide dielectric layer (120) and a polymer dielectric layer (122). The polymer dielectric layer (122) is over the silicon dioxide dielectric layer (120) and extends across the integrated circuit (100). A bond pad (126) is on a top plate (124) of the isolation capacitor (116). Another bond pad (130) is outside the isolation capacitor (116) and is electrically coupled to an instance of metal interconnects (110) through a via (132) through the polymer dielectric layer (122). |
CLAIMS
What is claimed is:
1. An integrated circuit, comprising: a substrate including a semiconductor; a plurality of transistors disposed in the substrate; at least one metal level containing metal interconnects, the metal level being disposed over the substrate; an isolation capacitor including: a bottom plate; a silicon dioxide dielectric layer disposed over the bottom plate; a polymer dielectric layer disposed over the silicon dioxide dielectric layer and extending across the integrated circuit; and a top plate disposed over the polymer dielectric layer; a bond pad disposed on the top plate; and another bond pad outside the isolation capacitor, the another bond pad being electrically coupled to an instance of the metal interconnects through a via disposed through the polymer dielectric layer.
2. The integrated circuit of claim 1, wherein the bottom plate is a part of the metal level.
3. The integrated circuit of claim 1, wherein the bottom plate includes a layer of adhesion metal containing titanium, and a layer of sputtered aluminum over the layer of adhesion metal.
4. The integrated circuit of claim 1, wherein the top plate includes a metal seed layer and a layer of electroplated copper.
5. The integrated circuit of claim 1, wherein the silicon dioxide dielectric layer extends across the integrated circuit.
6. The integrated circuit of claim 1, wherein the silicon dioxide dielectric layer is localized to the isolation capacitor.
7. The integrated circuit of claim 1, wherein a thickness of the silicon dioxide dielectric layer is 8 microns to 10 microns.
8. The integrated circuit of claim 1, wherein the another bond pad is electrically coupled to the instance of the metal interconnects through a lower via disposed through the silicon dioxide dielectric layer.
9. The integrated circuit of claim 1, wherein a thickness of the polymer dielectric layer is 9 microns to 12 microns.
10. The integrated circuit of claim 1, wherein the polymer dielectric layer is formed of polyimide.
11. The integrated circuit of claim 1, wherein the polymer dielectric layer is formed of poly(p-phenylene-2,6-benzobisoxazole) (PBO).
12. The integrated circuit of claim 1, wherein the polymer dielectric layer is formed of benzocyclobutene (BCB).
13. A method of forming an integrated circuit, the method comprising: providing a substrate including a semiconductor; forming a plurality of transistors in the substrate; forming at least one metal level over the substrate, the metal level containing metal interconnects; forming a bottom plate of an isolation capacitor; forming a silicon dioxide dielectric layer of the isolation capacitor over the bottom plate; forming a polymer dielectric layer of the isolation capacitor over the silicon dioxide dielectric layer, the polymer dielectric layer extending across the integrated circuit; forming a via hole through the polymer dielectric layer; forming a top plate of the isolation capacitor over the polymer dielectric layer; forming a bond pad on the top plate; and forming another bond pad outside the isolation capacitor, the another bond pad being electrically coupled to an instance of the metal interconnects through a via in the via hole.
14. The method of claim 13, wherein forming the bottom plate includes: forming an adhesion metal layer containing titanium; forming a sputtered aluminum layer on the adhesion metal layer; forming an etch mask over the sputtered aluminum layer, which covers an area for the bottom plate; and etching the sputtered aluminum layer and the adhesion metal layer in areas exposed by the etch mask.
15. The method of claim 13, wherein forming the top plate includes: forming a metal seed layer over the polymer dielectric layer; forming a plating mask over the metal seed layer to expose an area for the top plate; electroplating copper on the metal seed layer in the area for the top plate; and removing the plating mask.
16. The method of claim 13, wherein the polymer dielectric layer is formed of polyimide.
17. The method of claim 13, wherein the polymer dielectric layer is formed of PBO.
18. The method of claim 13, wherein the silicon dioxide dielectric layer extends across the integrated circuit, the via hole is an upper via hole, and the method further comprises forming a lower via hole through the silicon dioxide dielectric layer outside an area for the isolation capacitor under the upper via hole, so that the another bond pad is electrically coupled to the instance of the metal interconnects through a lower via in the lower via hole.
19. The method of claim 13, further comprising patterning the silicon dioxide dielectric layer to be localized to the isolation capacitor.
20. The method of claim 13, wherein forming the silicon dioxide dielectric layer includes repeated formation of sublayers of silicon dioxide using a PECVD process with TEOS, which produces a stress level less than 40 megapascals for a 600 nanometer thick sublayer. |
HIGH VOLTAGE HYBRID POLYMERIC-CERAMIC DIELECTRIC CAPACITOR

[0001] This relates in general to integrated circuits, and in particular to high voltage capacitors in integrated circuits.

BACKGROUND

[0002] An integrated circuit may receive input signals that have direct current (DC) bias levels. Such levels may be several hundred volts above the operating voltage for the integrated circuit. Accordingly, isolation components may exist between the input signals and components (such as transistors) in the integrated circuit. It may be desirable for the isolation component to provide transient protection and surge protection of several thousand volts, while achieving long term reliability. It may further be desirable to integrate the isolation component into the integrated circuit, but meeting the protection and reliability goals while attaining a desired fabrication cost of the integrated circuit is challenging.

SUMMARY

[0003] In described examples, an integrated circuit includes an isolation capacitor, which includes a silicon dioxide dielectric layer and a polymer dielectric layer. The polymer dielectric layer is over the silicon dioxide dielectric layer and extends across the integrated circuit. A bond pad is on a top plate of the isolation capacitor. Another bond pad is outside the isolation capacitor and is electrically coupled to an instance of metal interconnects through a via through the polymer dielectric layer.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] FIG. 1 is a cross-sectional view of an example integrated circuit containing an isolation capacitor.

[0005] FIG. 2 is a cross-sectional view of the integrated circuit of FIG. 1, with an alternate configuration of the silicon dioxide dielectric layer.

[0006] FIGS. 3A-3J are cross-sectional views of another example integrated circuit containing an isolation capacitor, depicted in successive stages of fabrication.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0007] The following co-pending patent application is hereby incorporated by reference: Application No. US 13/960,406.

[0008] Referring to FIG. 1, an integrated circuit 100 is formed in and on a semiconductor substrate 102 and includes active components 104, which are shown in FIG. 1 as transistors 104. The active components 104 may be laterally isolated by field oxide 106. The integrated circuit 100 further includes at least one level of metal interconnects. In this example, the integrated circuit 100 includes interconnects in a first metal level 108, and interconnects in a second metal level 110, which are vertically connected by vias 112 and connected to the active components 104 through contacts 114. The metal interconnects in the first metal level 108 and the second metal level 110 may include, for example, etched aluminum or damascene copper.

[0009] The integrated circuit 100 includes at least one isolation capacitor 116. A bottom plate 118 of the isolation capacitor 116 may, for example, be a part of the second metal level 110 as shown in FIG. 1. The isolation capacitor 116 includes a silicon dioxide dielectric layer 120, which extends across the integrated circuit 100. A thickness of the silicon dioxide dielectric layer 120 is selected to provide long term reliability for the isolation capacitor 116. For example, an instance of the isolation capacitor 116 providing isolation up to 7000 volts DC may have the silicon dioxide dielectric layer 120 with a thickness of 9 microns.

[0010] The isolation capacitor 116 includes a polymer dielectric layer 122 over the silicon dioxide dielectric layer 120.
The polymer dielectric layer 122 also extends across the integrated circuit 100. The polymer dielectric layer 122 may be, for example, polyimide that has been treated to remove residual moisture, poly(p-phenylene-2,6-benzobisoxazole) (PBO), benzocyclobutene (BCB), or a parylene polymer such as parylene C or parylene D. A thickness of the polymer dielectric layer 122 is selected to provide surge and transient protection for the isolation capacitor 116. For example, an instance of the isolation capacitor 116 providing protection from a voltage surge up to 10,000 volts and up to 5000 alternating current (AC) root-mean-square (rms) volts may have a polymer dielectric layer 122 with a thickness of 10 microns.

[0011] The isolation capacitor 116 includes a top plate 124 over the polymer dielectric layer 122. The top plate 124 is at least 5 microns thick. The top plate 124 may include, for example, etched aluminum or electroplated copper. A bond pad 126 is disposed on the top plate 124.

[0012] The integrated circuit 100 may include top level interconnect elements 128 over the polymer dielectric layer 122, which support bond pads 130 for low voltage signals or supply voltages. The top level interconnect elements 128 may be coupled to the active components 104 through vias 132 through the polymer dielectric layer 122 and the silicon dioxide dielectric layer 120.

[0013] A layer of protective overcoat 134 is disposed over the top plate 124 and the polymer dielectric layer 122, with openings for the bond pad 126 on the top plate 124 and the bond pads 130 for the low voltage signals and supply voltages. The bond pad 126 and the bond pads 130 may be wire bond pads that support wire bonds 136 as shown in FIG. 1, or alternatively may be bump bond pads that support bump bonds.

[0014] During operation of the integrated circuit 100, input signals applied to the bond pad 126 are transmitted to at least one instance of the active components 104 through the isolation capacitor 116. A capacitance of the isolation capacitor 116 may be, for example, 50 to 250 femtofarads. Forming the isolation capacitor 116 to include the polymer dielectric layer 122 over the silicon dioxide dielectric layer 120 may advantageously provide long term reliability and protection from voltage surges and transients.

[0015] Referring to FIG. 2, the silicon dioxide dielectric layer 120 is patterned to be localized to the isolation capacitor 116, so that the top level interconnect elements 128 may be coupled to the active components 104 through vias 132 through the polymer dielectric layer 122 only. Forming the silicon dioxide dielectric layer 120 to be localized to the isolation capacitor 116 eliminates vias through the silicon dioxide dielectric layer 120, and thereby may advantageously reduce fabrication cost and complexity of the integrated circuit 100.
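As a rough sanity check on the figures above, the series capacitance of the hybrid stack can be estimated with the parallel-plate formula C = ε0·εr·A/d. The relative permittivities (about 3.9 for silicon dioxide, about 3.4 for polyimide) and the plate area in the C sketch below are assumptions for illustration, not values taken from the disclosure.

    /* Rough parallel-plate estimate of the hybrid isolation capacitor
       (illustrative only; permittivities and plate area are assumed). */
    #include <stdio.h>

    int main(void)
    {
        const double e0      = 8.854e-12;        /* vacuum permittivity, F/m */
        const double er_ox   = 3.9;              /* assumed, silicon dioxide */
        const double er_poly = 3.4;              /* assumed, polyimide */
        const double d_ox    = 9e-6;             /* oxide thickness from the example above, m */
        const double d_poly  = 10e-6;            /* polymer thickness from the example above, m */
        const double area    = 250e-6 * 250e-6;  /* assumed square plate, m^2 */

        double c_ox     = e0 * er_ox * area / d_ox;
        double c_poly   = e0 * er_poly * area / d_poly;
        double c_series = c_ox * c_poly / (c_ox + c_poly);

        /* field across the oxide alone at the 7000 V DC rating, MV/cm */
        double field = 7000.0 / d_ox / 1e8;

        printf("series capacitance: %.0f fF\n", c_series * 1e15);
        printf("oxide field at 7 kV: %.1f MV/cm\n", field);
        return 0;
    }

With these assumed values the sketch yields roughly 105 fF, inside the 50 to 250 femtofarad range given above, and an oxide field near 7.8 MV/cm, below the roughly 10 MV/cm intrinsic breakdown commonly cited for silicon dioxide.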
[0016] Referring to FIG. 3A, the integrated circuit 300 is formed in and on a substrate 302, which includes semiconductor material. The substrate 302 may be, for example, a single crystal silicon wafer, a silicon-on-insulator (SOI) wafer, a hybrid orientation technology (HOT) wafer with regions of different crystal orientations, or other material appropriate for fabrication of the integrated circuit 300.

[0017] Elements of field oxide 306 may be formed at a top surface of the substrate 302 to laterally isolate components of the integrated circuit 300. The field oxide 306 may be formed, for example, using a local oxidation of silicon (LOCOS) process or a shallow trench isolation (STI) process. Active components 304, such as metal oxide semiconductor (MOS) transistors 304 as shown in FIG. 3A, are formed in and on the substrate 302.

[0018] A pre-metal dielectric (PMD) layer 338 is formed over the active components 304 and the substrate 302. The PMD layer 338 may be, for example, a dielectric layer stack including: a silicon nitride or silicon dioxide PMD liner 10 to 100 nanometers thick deposited by plasma enhanced chemical vapor deposition (PECVD); a layer of silicon dioxide, phospho-silicate glass (PSG), or boro-phospho-silicate glass (BPSG), commonly 100 to 1000 nanometers thick, deposited by PECVD and commonly leveled by a chemical-mechanical polish (CMP) process; and an optional PMD cap layer, commonly 10 to 100 nanometers of a hard material such as silicon nitride, silicon carbide nitride, or silicon carbide.

[0019] Contacts 314 are formed through the PMD layer 338 to make electrical connections to the active components 304. The contacts 314 may be formed, for example, by etching contact holes through the PMD layer 338 to expose the substrate 302 using a reactive ion etch (RIE) process, forming a liner of titanium and titanium nitride using a sputter process and an atomic layer deposition (ALD) process respectively, forming a tungsten layer on the liner using a CVD process to fill the contact holes, and removing the tungsten and liner from a top surface of the PMD layer 338 using etchback and/or chemical mechanical polish (CMP) processes.

[0020] Metal interconnects in a first metal level 308 are formed over the PMD layer 338, making electrical connections to the contacts 314. The metal interconnects in the first metal level 308 may be formed using an aluminum metallization process, by forming a layer of adhesion metal, such as titanium tungsten or titanium nitride, on the contacts and the PMD layer, forming a layer of sputtered aluminum, such as aluminum with a few percent titanium, copper and/or silicon, on the layer of adhesion metal, and possibly forming an optional layer of antireflection metal, such as titanium nitride, on the layer of sputtered aluminum. An etch mask is formed over the layer of antireflection metal to cover areas for the metal interconnects. The etch mask may include photoresist formed by a photolithographic process, or may include inorganic hard mask materials. An RIE process removes the layer of antireflection metal, the layer of sputtered aluminum, and the layer of adhesion metal exposed by the etch mask, leaving the metal interconnects as shown in FIG. 3A.

[0021] Alternatively, the metal interconnects in the first metal level 308 may be formed using a copper damascene process by forming a first intra-metal dielectric (IMD) layer over the PMD layer 338, and etching trenches in the IMD layer, commonly between 50 and 150 nanometers deep. A layer of liner metal such as tantalum nitride is formed on the bottom and sides of the trenches, commonly by physical vapor deposition, atomic layer deposition, or chemical vapor deposition. A seed layer of copper is formed on the liner metal, commonly by sputtering. The trenches are subsequently filled with copper, commonly by electroplating. Copper and liner metal are removed from a top surface of the IMD layer by CMP and etch processes, leaving the copper and liner metal in the trenches.
[0022] An inter-level dielectric (ILD) layer 340 is formed over the metal interconnects in the first metal level 308. The ILD layer 340 may include, for example, silicon dioxide formed by a plasma enhanced chemical vapor deposition (PECVD) process using tetraethyl orthosilicate, also known as tetraethoxysilane or TEOS.

[0023] Vias 312 are formed through the ILD layer 340 to make electrical connections to the metal interconnects in the first metal level 308. The vias 312 may be formed, for example, by etching via holes through the ILD layer 340 to expose the metal interconnects in the first metal level 308 using an RIE process, forming a liner of titanium and/or titanium nitride, forming a tungsten layer on the liner using a CVD process to fill the via holes, and removing the tungsten and liner from a top surface of the ILD layer 340 using etchback and/or CMP processes.

[0024] A layer of interconnect metal 342 is formed over the ILD layer 340. The layer of interconnect metal 342 may include, for example, an adhesion metal layer of 10 to 50 nanometers of titanium tungsten or titanium, a sputtered aluminum layer 200 to 1000 nanometers thick on the adhesion metal layer, and possibly an optional antireflection metal layer of titanium nitride 20 to 50 nanometers thick on the sputtered aluminum layer.

[0025] An interconnect etch mask 344 is formed over the layer of interconnect metal 342 to cover areas for metal interconnects in a second metal level. The interconnect etch mask 344 may include photoresist formed by a photolithographic process.

[0026] Referring to FIG. 3B, an interconnect metal etch process removes metal from the layer of interconnect metal 342 of FIG. 3A in areas exposed by the interconnect etch mask 344 to leave metal interconnects of a second metal level 310 and a bottom plate 318 of the isolation capacitor 316. The interconnect metal etch process may include an RIE process using chlorine, or may include a wet etch using an aqueous mixture of phosphoric acid, acetic acid, and nitric acid, commonly referred to as aluminum leach etch. The interconnect etch mask 344 is removed after the interconnect metal etch process is completed.

[0027] Referring to FIG. 3C, an IMD layer 346 is formed over the ILD layer 340 between the metal interconnects of the second metal level 310 and the bottom plate 318. The IMD layer 346 may include, for example, silicon dioxide formed by thermal decomposition of methylsilsesquioxane (MSQ).

[0028] A silicon dioxide dielectric layer 320 is formed over the second metal level 310 and the bottom plate 318, and extends across the integrated circuit 300. The silicon dioxide dielectric layer 320 may be formed, for example, by repeated formation of sublayers of silicon dioxide using a PECVD process with TEOS, which produces a stress level less than 40 megapascals for a 600 nanometer thick sublayer. A thickness of the silicon dioxide dielectric layer 320 may be, for example, 8 microns to 10 microns. Forming the silicon dioxide dielectric layer 320 to extend across the integrated circuit 300 may provide process margin for subsequently formed features, and thereby desirably reduce a fabrication cost of the integrated circuit 300.
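The sublayer approach implies a simple deposition count, sketched below as illustrative arithmetic only: an 8 to 10 micron layer built from roughly 600 nanometer low-stress sublayers takes on the order of 14 to 17 PECVD passes.

    /* Illustrative arithmetic only: number of ~600 nm PECVD/TEOS
       sublayers needed to reach the thickness of layer 320. */
    #include <stdio.h>

    int main(void)
    {
        const double sublayer_nm = 600.0;
        for (double target_um = 8.0; target_um <= 10.0; target_um += 1.0) {
            /* round up: a partial pass still requires a full deposition */
            int passes = (int)((target_um * 1000.0 + sublayer_nm - 1.0) / sublayer_nm);
            printf("%.0f microns -> %d sublayers\n", target_um, passes);
        }
        return 0;
    }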
[0029] A via etch mask 348 is formed over the silicon dioxide dielectric layer 320 to expose an area for a via to the metal interconnects of the second metal level 310. The via etch mask 348 may include photoresist formed by a photolithographic process, or may include a hard mask material, such as silicon nitride or silicon carbide formed by a mask and etch process.

[0030] Referring to FIG. 3D, a via etch process removes silicon dioxide from the silicon dioxide dielectric layer 320 in the area exposed by the via etch mask 348 to form a lower via hole 350. The via etch process may include an RIE process using fluorine radicals. The via etch mask 348 is removed after the via etch process is completed, such as by using an asher process.

[0031] Referring to FIG. 3E, a polymer dielectric layer 322 is formed over the silicon dioxide dielectric layer 320, and extends across the integrated circuit 300. The polymer dielectric layer 322 may be formed of, for example, polyimide, PBO, BCB, or parylene. A thickness of the polymer dielectric layer 322 may be, for example, 9 microns to 12 microns. Forming the polymer dielectric layer 322 to extend across the integrated circuit 300 may provide process margin for subsequently formed features of the integrated circuit 300, and thereby desirably reduce the fabrication cost.

[0032] An upper via hole 352 is formed through the polymer dielectric layer 322 over the lower via hole 350. If the polymer dielectric layer 322 is formed of a photosensitive material, such as photosensitive polyimide, the upper via hole 352 may be formed directly using a photolithographic process of exposure and develop. If the polymer dielectric layer 322 is formed of a non-photosensitive material, such as non-photosensitive polyimide, the upper via hole 352 may be formed by a mask and etch process. The polymer dielectric layer 322 is processed to remove residual moisture. For example, an instance of the polymer dielectric layer 322 including polyimide may be baked at 150 °C for 48 hours to remove residual moisture.

[0033] Referring to FIG. 3F, a metal seed layer 354 is formed over the polymer dielectric layer 322, extending into the upper via hole 352 and the lower via hole 350, and contacting a metal interconnect of the second metal level 310. The metal seed layer 354 may include, for example, an adhesion layer of 10 to 50 nanometers of titanium tungsten and a plating layer of 50 to 200 nanometers of sputtered copper.

[0034] A plating mask 356 is formed over the metal seed layer 354 to expose areas for a subsequently formed thick copper level. The plating mask 356 may include photoresist and may be 20 percent to 80 percent thicker than the subsequently formed thick copper level.

[0035] Referring to FIG. 3G, a copper electroplating process forms an electroplated copper layer 358 on the metal seed layer 354 in areas exposed by the plating mask 356. The electroplated copper layer 358 extends into the upper via hole 352 and the lower via hole 350. The electroplated copper layer 358 may be, for example, 5 microns to 10 microns thick.

[0036] Referring to FIG. 3H, the plating mask 356 of FIG. 3G is removed, such as by dissolving the polymer materials of the plating mask 356 in an appropriate solvent, such as acetone or N-methylpyrrolidinone, commonly referred to as NMP. A bond pad plating mask 360 is formed over the electroplated copper layer 358 and the polymer dielectric layer 322, exposing areas on the electroplated copper layer 358 for under-bump metal for bond pads. An electroplating operation forms plated bond pads 362 on the electroplated copper layer 358, including on the top plate 324 of the isolation capacitor 316, in the areas exposed by the bond pad plating mask 360. The bond pads 362 may include layers of nickel, palladium, and gold. Forming the bond pad 362 on the top plate 324 simplifies the structure of the integrated circuit 300 and thereby reduces the fabrication cost.
The bond pad plating mask 360 is subsequently removed, such as by dissolution in acetone or NMP.

[0037] Referring to FIG. 3I, the metal seed layer 354 is removed in areas that are not covered by the electroplated copper layer 358, such as by using an aqueous solution of nitric acid and hydrogen peroxide or an aqueous solution of ammonium hydroxide and hydrogen peroxide. The electroplated copper layer 358, combined with the metal seed layer 354 in the area for the isolation capacitor 316, provides a top plate 324 of the isolation capacitor 316.

[0038] Referring to FIG. 3J, a layer of protective overcoat 334 is formed over the existing top surface of the integrated circuit 300 with openings over the bond pads 362. The layer of protective overcoat 334 may be, for example, polyimide or PBO, formed by a photolithographic process. In this example, the bond pads 362 are bump bond pads 362. Bump bonds 364 are formed on the bond pads 362. The electroplated copper layer 358 and the metal seed layer 354 in the upper via hole 352 and the lower via hole 350 provide an electrical coupling between the bump bond 364 and the metal interconnects of the second metal level 310. Alternatively, the bond pads 362 may be wire bond pads. The integrated circuit 300 may be encapsulated or sealed in a package to reduce moisture uptake in the polymer dielectric layer 322.

[0039] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims. |
A scatter/gather technique optimizes unstructured streaming memory accesses, providing off-chip bandwidth efficiency by accessing only useful data at a fine granularity, and off-loading memory access overhead by supporting address calculation, data shuffling, and format conversion. |
1. An apparatus comprising: a processor; a cache coupled to the processor to transfer data between the cache and off-chip memory using cache line size transfers; and a scatter/gather engine accessible by the processor, the scatter/gather engine able to generate sub-cache line size data accesses to the off-chip memory in order to read/write sub-cache line size data directly from/to the off-chip memory for use by the processor.
2. The apparatus of claim 1, wherein the scatter/gather engine further comprises: an access processor able to compute memory addresses for the memory accesses and to perform data format conversion.
3. The apparatus of claim 1, wherein the scatter/gather engine further comprises: a stream port coupled to the processor, the stream port including a buffer, accessible by the access processor and by the processor, able to store ordered data.
4. The apparatus of claim 1, wherein the scatter/gather engine further comprises: a cache interface coupled to the cache, the cache interface cooperating with the cache to provide data coherence when the same data is accessed through both the cache and the scatter/gather engine.
5. The apparatus of claim 1, further comprising: a memory controller coupled to the scatter/gather engine and the off-chip memory, the memory controller supporting cache line size and sub-cache line size accesses to the off-chip memory.
6. The apparatus of claim 2, wherein the access processor further comprises: an access pattern generator to generate memory accesses according to a program-defined pattern.
7. The apparatus of claim 2, wherein the access processor further comprises: an access pattern generator to generate memory accesses according to a stride-based access pattern.
8. The apparatus of claim 2, wherein the access processor further comprises: an access pattern generator to generate memory accesses according to an indirect access pattern.
9. A method comprising: transferring cache line size data between a cache and off-chip memory; and generating, by a scatter/gather engine, sub-cache line size data accesses to the off-chip memory in order to read/write sub-cache line size data directly from/to the off-chip memory for use by a processor.
10. The method of claim 9, wherein data coherence is enforced when the same data is accessed through both the cache and the scatter/gather engine.
11. The method of claim 10, wherein data coherence is enforced via mutual exclusion of the data between a buffer in the scatter/gather engine and the cache.
12. The method of claim 10, wherein data coherence is enforced by address range checking in a directory.
13. The method of claim 9, wherein the generating further comprises: computing memory addresses for the memory accesses; and performing data format conversion.
14. The method of claim 9, further comprising: allocating a stream port in the scatter/gather engine; and accessing data through the allocated stream port.
15. The method of claim 9, further comprising: allocating a stream port in the scatter/gather engine to a thread in the processor; and, in response to a thread context switch, releasing the stream port after write data stored in the stream port has been written to memory.
16. The method of claim 9, wherein the generating further comprises: computing memory addresses for the memory accesses according to a program-defined pattern.
17. An article comprising a
machine-accessible medium having associated information, wherein the information, when accessed, causes a machine to perform: transferring cache line size data between a cache and off-chip memory; and generating, by a scatter/gather engine, sub-cache line size data accesses to the off-chip memory in order to read/write sub-cache line size data directly from/to the off-chip memory for use by a processor.
18. The article of claim 17, wherein the generating further comprises: computing memory addresses for the memory accesses; and performing data format conversion.
19. The article of claim 17, further comprising: allocating a stream port in the scatter/gather engine to handle sub-cache line size data; and directing memory accesses through the allocated stream port.
20. The article of claim 17, further comprising: allocating a stream port in the scatter/gather engine to a thread in the processor; and, in response to a thread context switch, releasing the stream port after write data stored in the stream port has been written to memory.
21. The article of claim 18, wherein the computing further comprises: generating memory access addresses according to a stride-based pattern.
22. The article of claim 18, wherein the computing further comprises: generating memory access addresses according to an indirect pattern.
23. A system comprising: a dynamic random access memory (DRAM); a processor; a cache coupled to the processor to transfer data between the cache and the DRAM using cache line size transfers; and a scatter/gather engine accessible by the processor, the scatter/gather engine able to generate sub-cache line size data accesses to the DRAM in order to read/write sub-cache line size data directly from/to the DRAM for use by the processor.
24. The system of claim 23, wherein data coherence is enforced when the same data is accessed through both the cache and the scatter/gather engine.
25. The system of claim 23, further comprising: a memory controller coupled to a cache interface and the DRAM, the memory controller supporting cache line size and sub-cache line size accesses to the DRAM. |
Scatter/gather intelligent memory architecture on multiprocessor systems

TECHNICAL FIELD

The present disclosure relates to microprocessor systems, and more particularly, to memory architectures in microprocessor systems.

BACKGROUND

The latency of accesses to main (external) memory lags behind increases in processor speed, resulting in a performance bottleneck. To reduce access latency, many processors include on-chip caches that locally store large contiguous blocks of data (cache lines) fetched from main memory, relying on spatial and temporal locality. Spatial locality is the notion that data is more likely to be referenced when data near it has just been referenced. Temporal locality is the notion that data referenced at one point in time is likely to be referenced again in the near future.

Although many applications have data access patterns that exhibit temporal and spatial locality, there are also classes of applications whose data access patterns do not. For example, some multimedia, database, and signal processing applications do not exhibit a high degree of temporal or spatial locality. In addition, some strided and indirect access patterns used in many data-intensive applications do not exhibit a high degree of temporal or spatial locality.

Off-chip communication in traditional cache architectures is inefficient because data management is determined by the cache line size. If a data access pattern does not exhibit spatial locality, only a small portion of each cache line is actually used, and the memory bandwidth spent fetching the rest of the cache line is wasted. In addition, because data is also buffered at whole-cache-line granularity, the cache is used inefficiently, which causes more cache misses and more off-chip communication.

Furthermore, traditional processor architectures do not take advantage of the parallelism available in memory accesses. To prepare the operands of a computation, i.e., the values operated on by instructions, the processor may incur large overhead, for example for address calculation and data format conversion in addition to the actual memory access.
Although raw memory latency is one cause of performance bottlenecks, memory access overhead also increases access latency.

SUMMARY

According to an aspect of the present invention, there is provided an apparatus including: a processor; a cache coupled to the processor to transfer data between the cache and off-chip memory using cache line size transfers; and a scatter/gather engine accessible by the processor, the scatter/gather engine able to generate sub-cache line size data accesses to the off-chip memory in order to read/write sub-cache line size data directly from/to the off-chip memory for use by the processor.

In one embodiment of the apparatus, the scatter/gather engine further includes an access processor able to compute memory addresses for the memory accesses and to perform data format conversion.

In one embodiment of the apparatus, the scatter/gather engine further includes a stream port coupled to the processor, the stream port including a buffer, accessible by the access processor and by the processor, that stores ordered data.

In one embodiment of the apparatus, the scatter/gather engine further includes a cache interface coupled to the cache; when the same data is accessed through both the cache and the scatter/gather engine, the cache interface cooperates with the cache to provide data coherence.

In one embodiment of the apparatus, the apparatus further includes a memory controller coupled to the scatter/gather engine and the off-chip memory, the memory controller supporting cache line size and sub-cache line size accesses to the off-chip memory.

In one embodiment of the apparatus, the access processor further includes an access pattern generator for generating memory accesses according to a program-defined pattern.

In one embodiment of the apparatus, the access processor further includes an access pattern generator for generating memory accesses according to a stride-based access pattern.

In one embodiment of the apparatus, the access processor further includes an access pattern generator for generating memory accesses according to an indirect access pattern.

According to another aspect of the present invention, there is provided a method including: transferring cache line size data between a cache and off-chip memory; and generating, by a scatter/gather engine, sub-cache line size data accesses to the off-chip memory in order to read/write sub-cache line size data directly from/to the off-chip memory for use by a processor.

In one embodiment of the method, data coherence is enforced when the same data is accessed through both the cache and the scatter/gather engine.

In one embodiment of the method, data coherence is enforced via mutual exclusion of the data between a buffer in the scatter/gather engine and the cache.

In one embodiment of the method, data coherence is enforced by address range checks in a directory.

In one embodiment of the method, the generating further includes computing memory addresses for the memory accesses and performing data format conversion.

In one embodiment of the method, the method further includes allocating a stream port in the scatter/gather engine and accessing
data through the allocated stream port.

In one embodiment of the method, the method further includes: allocating a stream port in the scatter/gather engine to a thread in the processor; and, in response to a thread context switch, releasing the stream port after write data stored in the stream port has been written to memory.

In one embodiment of the method, the generating further includes computing memory addresses for the memory accesses according to a program-defined pattern.

According to yet another aspect of the present invention, there is provided an article comprising a machine-accessible medium having associated information, wherein the information, when accessed, causes a machine to perform: transferring cache line size data between a cache and off-chip memory; and generating, by a scatter/gather engine, sub-cache line size data accesses to the off-chip memory in order to read/write sub-cache line size data directly from/to the off-chip memory for use by a processor.

In one embodiment of the article, the generating further includes computing memory addresses for the memory accesses and performing data format conversion.

In one embodiment of the article, the information further causes the machine to perform: allocating a stream port in the scatter/gather engine to handle sub-cache line size data; and directing memory accesses through the allocated stream port.

In one embodiment of the article, the information further causes the machine to perform: allocating a stream port in the scatter/gather engine to a thread in the processor; and, in response to a thread context switch, releasing the stream port after write data stored in the stream port has been written to memory.

In one embodiment of the article, the computing further includes generating memory access addresses according to a stride-based pattern.

In one embodiment of the article, the computing further includes generating memory access addresses according to an indirect pattern.

According to yet another aspect of the present invention, there is provided a system including: a dynamic random access memory (DRAM); a processor; a cache coupled to the processor to transfer data between the cache and the DRAM using cache line size transfers; and a scatter/gather engine accessible by the processor, the scatter/gather engine able to generate sub-cache line size data accesses to the DRAM in order to read/write sub-cache line size data directly from/to the DRAM for use by the processor.

In one embodiment of the system, data coherence is enforced when the same data is accessed through both the cache and the scatter/gather engine.

In one embodiment of the system, the system further includes a memory controller coupled to the cache interface and the DRAM, the memory controller supporting cache line size and sub-cache line size accesses to the DRAM.

BRIEF DESCRIPTION OF THE DRAWINGS

By reading the following detailed description and referring to the accompanying drawings, the features of embodiments of the claimed subject matter will become apparent.
Similar reference numerals in the drawings indicate similar components, and in the drawings:

FIG. 1 is a block diagram of an embodiment of a multi-core processor for processing unstructured streaming data according to the principles of the present invention;

FIG. 2 is a block diagram illustrating a plurality of stream ports providing a communication mechanism between a computing processor and an access processor in the multi-core processor shown in FIG. 1;

FIG. 3 is a flowchart of an embodiment of a method for managing and accessing any of the stream ports shown in FIG. 2;

FIGS. 4 and 5 are block diagrams of embodiments of a scatter/gather engine including an access processor with a programmable engine;

FIGS. 6 and 7 are block diagrams of embodiments of an access pattern generator that may be included in the access processor shown in FIG. 1; and

FIG. 8 is a block diagram of one embodiment of a storage system supporting cache line size data transfers and sub-cache line size data transfers.

Although the following detailed description refers to illustrative embodiments of the claimed subject matter, many alternatives, modifications, and variations will be apparent to those skilled in the art. It is therefore intended that the claimed subject matter be viewed broadly and defined only as set forth in the appended claims.

DETAILED DESCRIPTION

A system according to one embodiment of the present invention captures irregular data access patterns in order to optimize memory latency and bandwidth. The system also reduces the instruction overhead associated with memory accesses, including address calculation and data format conversion. In one embodiment, a fast, narrow, multi-channel memory controller saves off-chip bandwidth by supporting efficient scatter/gather operations.

Although caches are generally efficient at capturing regular memory access patterns, they cannot capture random access patterns. One embodiment of a storage system in accordance with the principles of the present invention includes a traditional cache and a scatter/gather engine that cooperate to capture both types of memory access patterns. In addition, for random access patterns, memory access overhead can be offloaded to the scatter/gather engine to speed up computation. The independent scatter/gather engine can also begin fetching data from memory before the computing processor requests it, thereby efficiently prefetching the data. Data coherence is enforced if the same data is accessed by both the cache and the scatter/gather engine.

FIG. 1 is a block diagram of one embodiment of a multi-core processor 100 for processing unstructured streaming data according to the principles of the present invention. The multi-core processor 100 has a plurality of cores 102, 102N, where each core 102 includes a processor ("computing processor") 104 for performing data computation. Each core 102 also includes scatter/gather engine components integrated with a traditional cache hierarchy. In one embodiment, the scatter/gather engine components include a cache interface 106, an access processor 110, and a stream port 112. In one embodiment, each core 102 has a cache hierarchy consisting of a single level of cache ("L1 cache") 108.

Memory bandwidth savings are important in multi-core processors, where a large number of cores share a common memory interface with limited bandwidth.
By allowing data to be accessed in unstructured access patterns, the scatter/gather engine 150, in combination with the memory controller 116, reduces off-chip memory bandwidth usage of the main memory 118. For example, a data access might be a sub-cache line size data transfer. In addition to reducing bandwidth usage, one embodiment of the scatter/gather engine is fully programmable, has hardware coherence, can hide memory access latency, and can overlap memory access overhead with computation.

The multi-core processor 100 may include one or more levels of cache shared among the cores 102, 102N. In one embodiment, the cores 102, 102N share a single level of cache ("L2 cache") 114.

The multi-core processor 100 further includes a multi-channel memory controller 116. The multi-channel memory controller 116 supports cache line size data transfers, i.e., large sequential accesses to/from the cache, as well as small-granularity data transfers to/from the off-chip (on-board, external, or main) memory 118. The main memory 118 may be Rambus dynamic random access memory (RDRAM), double data rate dynamic random access memory (DDR RAM), synchronous dynamic random access memory (SDRAM), or any similar type of memory.

The stream port 112 includes a data buffer, an interface to the computing processor 104, an interface to the access processor 110, and an interface to the cache interface 106. The data buffer in the stream port 112 provides a communication mechanism between the computing processor 104 and the access processor 110.

The access processor 110 is coupled to the stream port 112 and the cache interface 106 and generates memory addresses according to an access pattern. The access processor 110 may be a programmable engine or hard-wired logic. Hard-wired logic supports a limited class of access patterns, while a programmable engine has the flexibility to adapt to any access pattern.

The cache interface 106 is coupled to the stream port 112, the access processor 110, and the memory controller 116, and provides data coherence between the caches 108, 114 and the stream port 112. The cache interface 106 also provides an interface to the multi-channel memory controller 116.

Each computing processor 104 has two memory access paths: one through the cache hierarchy (L1 cache (dedicated cache) 108, to level 2 (L2) cache (shared cache) 114, to main memory 118), and the other through the scatter/gather engine 150 (stream port 112, access processor 110, and cache interface 106) to the main memory 118. The multi-channel memory controller 116 provides the caches and the stream ports 112 with an interface to the main memory 118.

To avoid wasting memory bandwidth, the scatter/gather engine transfers and buffers only the required data size (referred to as sub-cache line size data access) instead of an entire cache line, according to the access pattern. In addition, memory access overhead and latency are offloaded by separating memory access from data computation: the access processor 110 prepares operands while the computing processor 104 performs computation.

To perform a function that computes on operands, the computing processor 104 allocates a stream port 112 and initializes the access processor 110. The stream port 112 provides a communication mechanism between the computing processor 104 and the access processor 110. For a read operation from the memory 118, the access processor 110 gathers data from the memory 118 and provides the data stream to the computing processor 104. For a write operation to the memory 118, the computing processor 104 writes a data stream, and the access processor 110 scatters the data to the memory 118. In one embodiment, data is placed into the stream port 112 in first-in-first-out (FIFO) order.
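The disclosure does not specify how the stream port's data buffer is organized, so the following is a minimal sketch of one plausible realization: a fixed-depth FIFO ring through which the access processor and the computing processor exchange ordered data. The type, names, and depth are assumptions for illustration.

    /* Minimal sketch of a stream port data buffer as a FIFO ring
       (illustrative only; not a structure defined by the disclosure). */
    #define PORT_DEPTH 64  /* assumed buffer depth, in elements */

    typedef struct {
        unsigned data[PORT_DEPTH];
        int head;   /* next element for the consumer */
        int tail;   /* next free slot for the producer */
        int count;  /* number of valid elements */
    } StreamPortFifo;

    /* Producer side: for a read pattern, the access processor gathers
       values from memory and places them here; for a write pattern,
       the computing processor is the producer. */
    int fifo_put(StreamPortFifo *p, unsigned value)
    {
        if (p->count == PORT_DEPTH) return 0;  /* full: producer stalls */
        p->data[p->tail] = value;
        p->tail = (p->tail + 1) % PORT_DEPTH;
        p->count++;
        return 1;
    }

    /* Consumer side: an element is dropped from the buffer once read,
       consistent with the streaming access model described later. */
    int fifo_get(StreamPortFifo *p, unsigned *value)
    {
        if (p->count == 0) return 0;  /* empty: consumer stalls */
        *value = p->data[p->head];
        p->head = (p->head + 1) % PORT_DEPTH;
        p->count--;
        return 1;
    }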
According to one embodiment of the invention, the scatter/gather technique performed by the scatter/gather engine is an application-specific optimization for data-intensive applications that exhibit little spatial or temporal locality. Rather than using caches to capture spatial and temporal locality, the scatter/gather engine exploits pattern locality. Pattern locality is an application-defined memory access pattern: the application explicitly defines the access pattern and passes it to the scatter/gather engine, which exploits it. Each stream port 112 and access processor 110 includes internal registers that store the information required to perform a given scatter/gather operation. This approach is more proactive than caching because it explicitly communicates data access patterns rather than relying on the spatial or temporal locality of the data. Therefore, the scatter/gather technique is an application-specific optimization that can provide performance improvements for applications with little spatial locality and/or little temporal locality. Because address calculation is offloaded to the access processor, it can also benefit applications with high address calculation overhead.

One example of an application that can benefit from a scatter/gather engine is an application that uses a strided access pattern, such as matrix multiplication. The computing processor 104 and the access processor 110 may be used to improve the performance of matrix multiplication, where the access processor 110 performs the index calculations and the computing processor 104 performs the multiplications. An example of a function that computes the matrix product C = A × B is shown below. The function assumes that all matrices (A, B, and C) have been initialized.

    MatrixMultiply()
    {
        // matrices
        int A[N][M], B[M][L], C[N][L];

        // C = A * B
        for (i = 0; i < N; i++)
            for (j = 0; j < L; j++)
                for (k = 0; k < M; k++)
                    C[i][j] += A[i][k] * B[k][j];
    }

The MatrixMultiply function shown above can be divided into two independent functions. The first function computes the addresses of the operands and fetches them, i.e., C[i][j], A[i][k], and B[k][j]; the second function performs the computation on the operands, i.e., A × B. The second function (computation) executes on the computing processor 104, and the first function (the scatter/gather operation) executes on the access processor 110.

First, the computing processor 104 allocates a stream port 112 and maps an access pattern onto the allocated stream port 112. The access processor 110 then runs a memory access handler (software or hardware) that performs the scatter/gather operation and places data into the stream port 112. Meanwhile, the computing processor 104 accesses the data through the stream port 112. Finally, after the access pattern completes, the computing processor 104 releases the stream port 112.

In the MatrixMultiply function, the accesses to matrix B may be optimized because they are strided, i.e., column-by-column, accesses.
A stride access pattern data structure (such as STRIDE_ACCESS_PATTERN shown below) is allocated and configured for matrix B and stored in the scatter/gather engine. The stride access pattern structure includes predefined fields, such as the size of the structure (AccessPatternSize), a pointer to the handler function of the access processor (*Handler), and a read/write flag (ReadWrite). The other fields in the stride access pattern data structure are pattern dependent. For this pattern, they define the start address of the matrix (StartAddress), the size of the data elements (ElementSize), the sizes of the rows and columns (RowSize, ColumnSize), and the number of times the pattern repeats (Repeat).

// Stride access pattern data structure
struct STRIDE_ACCESS_PATTERN
{
    unsigned AccessPatternSize;
    void (*Handler) (STREAM_PORT, ACCESS_PATTERN);
    bool ReadWrite;
    unsigned StartAddress;   // &B[0][0]
    unsigned ElementSize;    // sizeof (int)
    unsigned RowSize;        // L
    unsigned ColumnSize;     // M
    unsigned Repeat;         // N
};

After the stride access pattern has been initialized in the scatter/gather engine, the matrix multiplication function may be modified to use the access processor 110 and the computing processor 104. An example of a matrix multiplication function that runs on the computing processor 104 and uses the scatter/gather engine to calculate the addresses of the operands and fetch the operands is shown below.

MatrixMultiply ()
{
    // matrices
    int A[N][M], B[M][L], C[N][L];
    int i, j, k;

    // Stream port
    STREAM_PORT PortB;

    // Open the port
    PortB = OPEN_PORT (WAIT);

    // Configure the port
    CONFIGURE_PORT (PortB,
        STRIDE_ACCESS_PATTERN (sizeof (STRIDE_ACCESS_PATTERN),
            StrideHandler, READ, &B[0][0], sizeof (int), L, M, N));

    // C = A * B
    for (i = 0; i < N; i++)
        for (j = 0; j < L; j++)
            for (k = 0; k < M; k++)
                C[i][j] += A[i][k] * ReadPort (PortB);

    // Close the port
    CLOSE_PORT (PortB);
}

The stream port 112 is opened for 'PortB' by the "OPEN_PORT" instruction, which waits until a port is assigned. After the port is allocated, it is configured by loading the stride access pattern parameters into the stride access pattern data structure as described above. The stream port 112 is then configured in the stride access pattern by the "CONFIGURE_PORT" instruction. In this embodiment, PortB is initialized as a read port that transfers data from the main memory 118 to the computing processor 104.

Data calculations are performed on ReadPort on 'PortB' instead of directly on matrix B. When the matrix multiplication is completed, 'PortB' is closed via 'CLOSE_PORT' in order to release the allocated resources for use by another port.

The 'MatrixMultiply()' function runs on the computing processor 104, while the 'StrideHandler()' function shown below runs on the access processor 110 to perform the scatter/gather operation. A handler is associated with a specific pattern. In this example, the handler takes two input parameters, a port and a pattern. 'Port' specifies a communication channel (stream port) to the computing processor 104, and the pattern provides the access pattern information. Using the information from the access pattern defined in the access pattern data structure, the StrideHandler() function calculates the storage addresses, reads the data stored at the calculated storage addresses, and writes the read data (values) to the stream port, where they are consumed by the computing processor 104 as the operands of the MatrixMultiply function it runs.
void StrideHandler (STREAM_PORT Port, ACCESS_PATTERN Pattern)
{
    unsigned i, j, k, Value;

    // Column-by-column access
    for (k = 0; k < Pattern.Repeat; k++)
        for (i = 0; i < Pattern.RowSize; i++)
            for (j = 0; j < Pattern.ColumnSize; j++)
            {
                // Read from memory
                Value = ReadMemory (Pattern.StartAddress +
                    (i + j * Pattern.RowSize) * Pattern.ElementSize);

                // Write to the port
                WritePort (Port, Value);
            }
}

The access processor 110 generates an address sequence and passes it to the cache interface 106 via the ReadMemory instruction. The cache interface 106 brings the data into the stream port 112. If the data already resides in the L1 cache 108, the L2 cache 114, or another stream port 112, the cache interface 106 obtains the data from the corresponding cache or stream port 112. Otherwise, the multi-channel memory controller 116 obtains the data from the main memory 118. Finally, the computing processor 104 reads or writes data through the stream port 112 according to whether the port is initialized as a read port or a write port.

In the illustrated embodiment, the programmable access processor runs the memory access software indicated above as 'StrideHandler()'. However, in other embodiments, the same functionality as 'StrideHandler()' may be implemented as hard-wired logic. A programmable access processor provides the flexibility to support many access patterns, while hard-wired logic provides higher performance and power efficiency at the cost of reduced flexibility.

In one embodiment, the stream port 112 supports a streaming data access model. In the streaming data access model, immediately after data is accessed, it is discarded from the buffer in the stream port 112 (in the case of a read operation) or written back to memory 118 (in the case of a write operation).

Data coherence issues can arise between the cache hierarchy and the stream port 112. For example, the computing processor 104 may access data through the stream port 112 while the same data is buffered in the cache hierarchy, or may access data through the cache hierarchy while the same data is buffered in the stream port.

Data coherence is supported by enforced mutual exclusion. The cache interface 106 monitors memory accesses through the cache hierarchy and the stream port 112 and takes the corresponding coherence actions. If there is a request to access data through the cache hierarchy, the same data is invalidated in the stream port 112. Similarly, if there is a request to access data through the stream port 112, the same data is invalidated in the cache hierarchy. Therefore, data coherence is guaranteed because valid data can be stored only in the cache hierarchy or in a buffer in the stream port 112, but not both.

In one embodiment, a directory-based coherence protocol is modified to treat the stream port 112 as another cache and maintain the directory entries accordingly. For a read miss, the directory is queried to find the current owner of the data, and the most recent data is obtained from it. For a write miss, the directory is queried to find all owners of copies of the data; the copies are invalidated and ownership is taken.

The method used to invalidate data in the cache is the same as in traditional directory-based protocols. However, invalidating data in the stream port 112 requires a different mechanism than the cache, due to the streaming data organization. First, the cache keeps data at the granularity of the cache line size, so tag overhead is tolerable.
However, since the stream port 112 manages data at byte granularity, in the worst case the tag overhead is extremely large. Second, data is placed into the stream port 112 in first-in-first-out (FIFO) order. Therefore, the stream port 112 would need to perform a fully associative search on each coherence action, because the corresponding data may be located anywhere in the data buffer in the stream port 112. The logic for a fully associative search is physically large and consumes much more power than a simple search. Therefore, a cache-like invalidation mechanism is too expensive for the stream port 112. For example, a cache-like invalidation mechanism for a stream port 112 with a 1KB data buffer would typically require 8KB of tags (a 64-bit address tag per byte of data) and logic for 1024 concurrent comparisons (a full search over 1K entries).

Assuming that most programs access a given data item either through the cache or through the stream port 112, but not both, that is, that the program does not frequently pass data between the cache and the stream port 112 concurrently, an address tag is not maintained for every data element. Instead, an address range for each stream port 112 is maintained at the stream port 112 and at the cache level common to all cores, which in the illustrated embodiment is the L2 cache 114. The address range tracks the lower and upper limits of the addresses currently buffered in the stream port 112. Whenever the stream port 112 accesses data, the address range is expanded, when necessary, to include the new data item. For example, if the stream port 112 accesses addresses in the order 0x10, 0x09, 0x05, and 0x07, the address range for the stream port 112 changes from (0x10, 0x10) to (0x09, 0x10), then to (0x05, 0x10), and remains (0x05, 0x10). When the shared cache (in the illustrated embodiment, the L2 cache 114) determines the owner set of a piece of data, it compares the address of the data against the address ranges of all stream ports 112. All stream ports 112 with a matching range are considered owners of a copy of the data. When a stream port 112 receives an invalidation request, the requested address is compared with its address range. If there is a match, the entire stream port 112 is invalidated, not just the corresponding data.
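The following is a minimal C sketch of the address-range mechanism just described; the struct and function names are illustrative assumptions, not part of the embodiment. Each stream port tracks only the lower and upper bounds of the addresses it has buffered, and an invalidation request that falls within the range invalidates the whole port.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t low;    // lowest address buffered so far
    uint64_t high;   // highest address buffered so far
    bool     valid;  // false once the port has been invalidated
} PortRange;

// Expand the range when the stream port accesses a new address.
void range_track (PortRange *r, uint64_t addr)
{
    if (!r->valid) { r->low = r->high = addr; r->valid = true; return; }
    if (addr < r->low)  r->low  = addr;
    if (addr > r->high) r->high = addr;
}

// Used by the shared cache to determine the owner set: any port whose
// range covers the address is treated as an owner of a copy.
bool range_owns (const PortRange *r, uint64_t addr)
{
    return r->valid && addr >= r->low && addr <= r->high;
}

// Coherence action: a matching invalidation clears the entire port,
// not just the corresponding data element.
void range_invalidate (PortRange *r, uint64_t addr)
{
    if (range_owns (r, addr))
        r->valid = false;
}

With the accesses 0x10, 0x09, 0x05, and 0x07 from the example above, range_track reproduces the (0x10, 0x10), (0x09, 0x10), (0x05, 0x10), (0x05, 0x10) sequence of ranges.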
FIG. 2 is a block diagram illustrating a plurality of stream ports for providing a communication mechanism between the computing processor 104 and the access processor 110 in the multi-core processor 100 shown in FIG. 1. Each of the stream ports 112_1, ... 112_N includes a stream port context and a data buffer. The stream port context holds control information for the stream port 112, and the data buffer temporarily stores data.

A set of instructions and library functions is provided to manage and access any of the stream ports 112_1, ... 112_N shown in FIG. 2. The operation type of a stream port 112 can be indicated by a 'Port_Type' state indicating whether the operation type is read or write. In one embodiment, for a stream port with a read operation type, the computing processor 104 may execute only the 'port_read' instruction, and the access processor 110 may execute only the 'port_write' instruction. For stream ports 112 having a write operation type, the opposite restriction applies. Synchronization is implicit in the 'port_read' and 'port_write' instructions. For example, the 'port_read' instruction stalls if there is no data ready in the stream port 112, and the 'port_write' instruction stalls if there is no empty space in the stream port 112.

In one embodiment, the data buffers in each stream port 112 are dual-ported, allowing the computing processor 104 and the access processor 110 to read and write simultaneously. The stream port supports accesses to data of different sizes, such as 1, 2, 4, 8, and 16 bytes, and can perform data format conversions such as sign extension, zero extension, truncation, or saturation.

The stream ports 112 and the access processors 110 may be managed by an operating system. For example, the operating system may maintain a resource table to track the list of free resources and to indicate which computing processor 104 has allocated a particular stream port 112 and access processor 110.

Functions (instructions) to open and close the stream port 112 are provided to allow a user application to allocate (open) or release (close) a particular stream port 112 and access processor 110. Instructions may also be provided to provide data protection and to manage the access processor 110.

The availability of a stream port 112 and an access processor 110 is not guaranteed. Therefore, when an instruction to allocate a port (open_port) is issued, the user application may wait until a stream port 112 and an access processor 110 are available, or, upon receiving an indication that a stream port 112 is not available, may access memory through the cache instead of the stream port 112.

When a stream port 112 is assigned to the computing processor 104, the process identifier (ID) associated with the stream port 112 is set to be the same as the process ID of the computing processor 104. Each stream port 112 has an internal register for storing the process ID associated with the stream port 112. The process ID can be set by using the 'port_set_id' instruction.

Data protection is provided through the use of process IDs. The computing processor 104 is prevented from accessing an incorrect stream port because the instructions to read and write data through the stream port 112 (port_read, port_write) are valid only if the process ID of the computing processor 104 matches the process ID stored in the internal register of the stream port 112.

The resource table may be used to locate an access processor 110 that has been assigned to a particular computing processor 104. When the access processor 110 is configured, for example by a special instruction (ap_launch), the internal registers of the access processor are initialized, and the program counter is initialized with the address (or function pointer) of the handler. Therefore, the computing processor 104 may run a handler only on an access processor 110 that has been assigned to that computing processor, thereby providing access-processor-level protection.

The storage addresses accessible by the access processor 110 may be limited to those storage addresses accessible by the computing processor 104 associated with the access processor 110 and the stream port 112. The storage address restriction can be enforced by an address translation mechanism based on the process ID. An instruction such as 'ap_set_id' may be provided to set the process ID of the access processor 110.

The computing processor 104 may be multi-threaded, where each thread has its own context, i.e., a program counter and thread-local registers. Each thread has an associated state, which may be inactive, running, ready to run, or sleeping.
When a thread of the computing processor 104 is switched out, i.e., there is a context switch, all stream ports 112 and access processors 110 allocated to that thread are also released. When the thread is subsequently switched back in, the stream port 112 and the access processor 110 are assigned again. Instructions (port_context_in, port_context_out) are provided for performing context switching. These instructions save or load the stream port context.

To switch out a thread, i.e., to perform a context switch, a 'port_context_out' instruction is issued to each of the stream ports, and an 'ap_context_out' instruction is issued to each of the access processors 110 assigned to the thread. The resource table is then updated.

For a write port, context switching is performed after the data elements in the stream port 112 are written to memory. In one embodiment, the 'port_context_out' instruction writes all internal register values of the stream port to memory, and the 'ap_context_out' instruction writes all internal register values of the access processor to memory.

To switch a thread back in, the resource table is checked to determine whether the required stream ports and access processors are available. If so, the stream ports and access processors are assigned, a 'port_context_in' instruction is issued for each assigned stream port, and an 'ap_context_in' instruction is issued for each assigned access processor.

The context switch instructions store and load only the access pattern information, that is, the control information. For write ports, the buffer is always empty when a context switch occurs, as described earlier. For a read port, data that is discarded during a context switch is re-fetched when the context is restored.

Thread migration is handled by a similar mechanism. If a thread migrates from one computing processor 104 to another computing processor 104N, the stream ports and access processors are all released from the old computing processor 104, and new resources are allocated in the other computing processor 104N. If the required resources are not available in the other computing processor 104N, the thread is still switched out of the computing processor 104 and waits in the other computing processor 104N in a suspended state.

FIG. 3 is a flowchart of one embodiment of a method for managing and accessing any of the stream ports shown in FIG. 2.

At block 300, an 'open_port' instruction is issued to allocate a stream port. Processing continues at block 302.

At block 302, because the availability of a stream port 112 and an access processor 110 is not guaranteed when an 'open_port' instruction is issued, the 'open_port' instruction may include a period of time to wait for an available stream port. Upon receiving an indication that a stream port is available, processing continues at block 304. If a stream port is not available, processing continues at block 312 to access the memory through the cache instead of the stream port 112.

At block 304, after the stream port 112 is assigned to the computing processor 104, the process identifier (ID) of the stream port 112 is set to be the same as the process ID of the computing processor 104. Each stream port 112 has an internal register for storing the process ID associated with the stream port 112.
For example, a 'port_set_id' instruction may be issued to set the process identifier field with the identifier of the process that owns the assigned stream port 112.

At block 306, after the stream port 112 has been assigned and the port ID has been set, 'port_read' and 'port_write' instructions may be issued to read and write data through the stream port instead of through the cache hierarchy. Data protection is provided through the use of process IDs, as described above.

At block 308, if a request is received from the computing processor 104 to close the stream port, such as by a 'close_port' instruction, processing continues at block 310. If no request is received, processing continues at block 306 to process read or write requests directed through the stream port.

At block 310, the stream port is closed and the allocated resources are released.

At block 312, the request for the stream port is rejected. The programmer has two options: wait and retry, or use the cache hierarchy instead of a stream port.

FIGS. 4 and 5 are block diagrams of embodiments of a scatter/gather engine 400, 500 including an access processor with a programmable engine. Programmable engines have the flexibility to adapt to any access pattern and are useful when many different access patterns need to be supported. In the embodiment shown in FIG. 4, the scatter/gather engine includes a stream port 112, an access processor 110, and a cache interface 106.

Referring to FIG. 4, the computing processor 104 may be any conventional processor that includes support for the aforementioned stream port instructions. The access processor 110 is a programmable engine or a dedicated processor optimized for address calculation and memory access. In one embodiment, the access processor 110 does not include arithmetic units such as multipliers or dividers, but includes multiple adders and shifters for fast address calculations.

The access processor 110 gathers the data read from the main memory 118 and forwards it to the computing processor, and scatters the data received from the computing processor 104 to the main memory 118. Therefore, the access processor 110 has two data access interfaces, one to the computing processor 104 and the other to the main memory 118. The interface to the computing processor 104 is through the stream port 112, and the interface to the memory is through the multi-channel memory controller 116. The access processor 110 issues scatter/gather load and store requests ('sg_load', 'sg_store') to the main memory 118 to perform a scatter/gather operation. The scatter/gather load and store requests utilize the sub-cache line granularity data transfers supported by the multi-channel memory controller 116. For example, in response to a 'port_read' request received from the stream port 112, the access processor generates an 'sg_load' request to the memory to access data at a sub-cache line size.

Referring to FIG. 5, in this embodiment the functions of the access processor 110 shown in FIG. 4 are implemented by an access thread 504 running in a simultaneous multi-threading (SMT) processor 502. The SMT processor 502 runs a computing thread 506 and an access thread 504. In another embodiment, multiple cores of a chip-level multiprocessing (CMP) architecture processor may be used, such that the computing thread 506 runs on one core and the access thread 504 runs on another core. This embodiment uses the 'port_read' and 'port_write' instructions, and also includes a storage unit 508 that supports the scatter/gather load and store instructions ('sg_load', 'sg_store').
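The decoupling between the computing thread 506 and the access thread 504 can be illustrated in software. The following is a minimal C sketch, assuming POSIX threads, in which a bounded FIFO stands in for the stream port 112: one thread plays the access thread (address generation and a stride-4 gather), the other plays the computing thread, and the blocking port_read/port_write functions model the implicit synchronization described above. All names are illustrative.

#include <pthread.h>
#include <stdio.h>

#define DEPTH 16
#define N 64

static int fifo[DEPTH];
static int head, tail, count;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

static int B[N];                  // stand-in for main memory

static void port_write (int v)    // blocks when the FIFO is full
{
    pthread_mutex_lock (&m);
    while (count == DEPTH) pthread_cond_wait (&not_full, &m);
    fifo[tail] = v; tail = (tail + 1) % DEPTH; count++;
    pthread_cond_signal (&not_empty);
    pthread_mutex_unlock (&m);
}

static int port_read (void)       // blocks when the FIFO is empty
{
    pthread_mutex_lock (&m);
    while (count == 0) pthread_cond_wait (&not_empty, &m);
    int v = fifo[head]; head = (head + 1) % DEPTH; count--;
    pthread_cond_signal (&not_full);
    pthread_mutex_unlock (&m);
    return v;
}

static void *access_thread (void *arg)   // gathers the operands
{
    (void) arg;
    for (int i = 0; i < N; i++)
        port_write (B[(4 * i) % N]);     // stride-4 gather
    return NULL;
}

int main (void)
{
    for (int i = 0; i < N; i++) B[i] = i;
    pthread_t ap;
    pthread_create (&ap, NULL, access_thread, NULL);
    long sum = 0;                        // computing thread
    for (int i = 0; i < N; i++) sum += port_read ();
    pthread_join (ap, NULL);
    printf ("sum = %ld\n", sum);
    return 0;
}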
When the number of access patterns is limited, a dedicated access pattern generator may be included in the access processor 110. FIGS. 6 and 7 are block diagrams of embodiments of access pattern generators that can be included in the access processor 110 shown in FIG. 1 to optimize address calculations.

Referring to FIG. 6, an embodiment of an access pattern generator 600 that can be included in the access processor shown in FIG. 1 is described. The access pattern generator 600 is dedicated to stride access patterns, which access discontinuous addresses such as 1, 5, 9, ... Two internal registers (a base address register 602 and a stride register 604) are set by the computing processor 104 for a specific access pattern. The base address register 602 stores the virtual storage address of the first data element, and the stride register 604 stores the stride between successive storage elements. For example, for the stride access pattern 1, 5, 9, the stride is 4. The address calculator 606 calculates the virtual storage address by adding the content of the base address register 602 and the content of the stride register 604. A translation lookaside buffer (TLB) 608 is used to translate the virtual storage address into a physical storage address.

For example, the base address register 602 can be initialized to 0xF0000 and the stride register to 4. The address calculator calculates each next address by adding 4, and outputs the virtual storage addresses 0xF0004, 0xF0008, 0xF000C, and so on.

Referring to FIG. 7, another embodiment of the access pattern generator is described. The access pattern generator 700 generates an indirect access pattern. It does not calculate addresses directly. Instead, the computing processor 104 initializes the index register 702 with the address of an index vector. The storage interface 704 then loads the index vector elements stored in memory into the address register 706. Finally, the TLB 708 translates the virtual address received from the address register 706 into a physical address.

For example, sparse matrix dense vector multiplication is one application in which an indirect access pattern can be employed. The function shown below performs sparse matrix dense vector multiplication. The function calculates C = A × B, where A is a sparse matrix in compressed row storage format, and B and C are dense vectors.

SparseMatrixDenseVectorMultiply ()
{
    // A: sparse matrix in compressed row storage format
    // B, C: dense vectors
    int Arow[N], Acol[NonZero], Adata[NonZero];
    int B[N], C[N];
    int i, j;

    // C = A * B
    for (i = 0; i < N; i++)
        for (j = Arow[i]; j < Arow[i + 1]; j++)
            C[i] += Adata[j] * B[Acol[j]];
}

An indirect access pattern data structure is created for the indirect access to matrix B, as shown below. The pattern data structure is similar to the stride access pattern described above, but in this example the indirect access pattern data structure defines the start address of the data vector (DataAddress), the start address of the index vector (IndexAddress), the data element size (ElementSize), and the stream length (StreamLength).
// Indirect access pattern data structure
struct INDIRECT_ACCESS_PATTERN
{
    unsigned AccessPatternSize;
    void (*Handler) (STREAM_PORT, ACCESS_PATTERN);
    bool ReadWrite;
    unsigned DataAddress;    // &B
    unsigned IndexAddress;   // &Acol
    unsigned ElementSize;    // sizeof (int)
    unsigned StreamLength;   // NonZero
};

The sample code shown below can run on the computing processor 104 and is the scatter/gather version of the sparse matrix dense vector multiplication code.

SparseMatrixDenseVectorMultiply ()
{
    // matrix and vectors
    int Arow[N], Acol[NonZero], Adata[NonZero];
    int B[N], C[N];
    int i, j;

    // Stream port
    STREAM_PORT PortB;

    // Open the port
    PortB = OPEN_PORT (WAIT);

    // Configure the port
    CONFIGURE_PORT (PortB,
        INDIRECT_ACCESS_PATTERN (sizeof (INDIRECT_ACCESS_PATTERN),
            IndirectHandler, READ, &B, &Acol, sizeof (int), NonZero));

    // C = A * B
    for (i = 0; i < N; i++)
        for (j = Arow[i]; j < Arow[i + 1]; j++)
            C[i] += Adata[j] * ReadPort (PortB);

    // Close the port
    CLOSE_PORT (PortB);
}

The 'IndirectHandler()' function shown below can run on the access processor 110. In one embodiment, the hard-wired logic shown in FIG. 7 performs the same operation. The 'IndirectHandler()' function loads the index value, calculates the data address, reads the memory, and writes the value to the stream port 112.

void IndirectHandler (STREAM_PORT Port, ACCESS_PATTERN Pattern)
{
    unsigned i, Index, Value;

    // Indirect access
    for (i = 0; i < Pattern.StreamLength; i++)
    {
        // Read the index
        Index = ReadMemory (Pattern.IndexAddress + i * sizeof (int));

        // Read from memory
        Value = ReadMemory (Pattern.DataAddress + Index * Pattern.ElementSize);

        // Write to the port
        WritePort (Port, Value);
    }
}

Referring again to FIG. 1, the cache interface 106 provides data coherence between the caches (L1 cache 108, L2 cache 114) and the stream port. After the access processor 110 calculates an address, it requests the cache interface 106 to load data into, or store data from, the stream port 112. In the memory hierarchy shown in FIG. 1, the target data to be read or the target buffer to be written can be located in the L1 cache 108 of the local core 102, the L1 cache 108N of the remote core 102N, the shared L2 cache 114, or the main memory 118. In addition, the target data may also be located in the stream port 112 of the local core 102 or the stream port 112N of the remote core 102N. The cache interface 106 identifies the correct target location.

A similar situation occurs when the computing processor 104 loads or stores data through the cache. The target location may be in the L1 cache 108 of the local core 102, the L1 cache 108N of the remote core 102N, the L2 cache 114, the main memory 118, the stream port 112 of the local core 102, or the stream port 112N of the remote core 102N. In traditional multi-processor systems, the cache coherence protocol enables the computing processor 104 to obtain the latest copy of the data with the necessary access permissions. However, with the addition of the stream port 112, the coherence protocol is extended to support data coherence between the caches and the stream ports.

In one embodiment, the cache interface is directly connected to the multi-channel memory controller 116. For each request to the stream port 112, the cache interface 106 makes a request to the multi-channel memory controller 116 and loads data to/from the main memory 118 regardless of the actual target location. For example, if the core 102 writes a data location through the cache 108, the corresponding cache line is placed into the cache 108 in a dirty exclusive state.
Then, if the core 102N attempts to read the same location through the stream port 112N, the cache interface 106N would load stale data from the main memory 118, because the cache interface 106N does not know that the L1 cache 108 of the core 102 has the most recent data. To prevent such data incoherence, the cache line is flushed from the cache 108 and the L2 cache 114 to the main memory 118 before the core 102N reads the data location. Proper synchronization is performed between the write to the cache by the core 102 and the read from the stream port 112N by the core 102N.

In another embodiment, the cache interface 106 provides full data coherency support. Whenever the computing processor 104 accesses the stream port 112, the cache interface 106 looks up the correct location of the latest data. In this embodiment, the cache interface 106N of the core 102N determines that the cache 108 has the latest data, and therefore the cache interface 106N obtains the data from the cache 108 rather than from the main memory 118. A traditional cache guarantees data coherence only for the caches and not for the stream port 112. For example, if the core 102 attempts to read a data location through the cache when the stream port 112N of the core 102N has the same data, the cache coherency protocol is extended so that the cache can obtain the data from the stream port 112N of the core 102N.

For applications that access the same data through both the caches and the stream ports 112, 112N at the same time, that is, if a large amount of communication is required between the caches and the stream ports 112, the embodiment that provides full data coherency support can provide better performance than the embodiment in which the cache interface is directly connected to the multi-channel memory controller 116. Otherwise, the directly connected embodiment is better because it does not suffer from coherence overhead and is less costly in terms of area and power requirements.

FIG. 8 is a block diagram of one embodiment of a memory system 800 that supports both cache line size data transfers and sub-cache line size data transfers. The memory space 802 is divided into a plurality of channels 806, and each channel 806 is divided into a plurality of banks 808. Traditional memory systems, such as double data rate dynamic random access memory (DDR RAM), provide a small number of wide memory access channels. Although they are efficient for large cache line size data transfers, the scatter/gather architecture requires sub-cache line size data transfers. In the embodiment of FIG. 8, multiple channels 806 are assigned to each memory controller 804 in order to provide fast, narrow, multi-channel memory control that supports efficient scatter/gather operations. The multi-channel memory controller 804 saves off-chip memory bandwidth and also reduces memory access latency. The scatter/gather technique improves off-chip bandwidth efficiency by accessing data at a finer than conventional granularity, allowing only the useful data to be fetched according to a given access pattern.
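As an illustration of how a multi-channel organization serves narrow transfers, the following is a minimal C sketch of low-order address interleaving; the power-of-two channel and bank counts and the bit layout are assumptions for illustration, since the embodiment does not specify a particular mapping. Consecutive sub-cache line accesses map to different channels 806, so they can proceed in parallel.

#include <stdint.h>
#include <stdio.h>

#define CHANNELS 8    // channels 806 per memory controller 804 (assumed)
#define BANKS    4    // banks 808 per channel (assumed)

typedef struct { unsigned channel, bank; uint64_t offset; } MemLoc;

MemLoc map_address (uint64_t addr)
{
    MemLoc loc;
    loc.channel = addr % CHANNELS;             // low bits select the channel
    loc.bank    = (addr / CHANNELS) % BANKS;   // next bits select the bank
    loc.offset  = addr / (CHANNELS * BANKS);   // remainder addresses the bank
    return loc;
}

int main (void)
{
    // Consecutive narrow accesses spread across all channels.
    for (uint64_t a = 0; a < 8; a++) {
        MemLoc l = map_address (a);
        printf ("addr %llu -> channel %u bank %u offset %llu\n",
                (unsigned long long) a, l.channel, l.bank,
                (unsigned long long) l.offset);
    }
    return 0;
}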
Those of ordinary skill in the art will appreciate that the methods involved in the embodiments of the present invention may be embodied in a computer program product that includes a computer-usable medium. For example, such a computer-usable medium may include a read-only storage device, such as a compact disk read-only memory (CD-ROM) disk or a conventional ROM device, or a computer magnetic disk, in which computer-readable program code is stored.

Although the embodiments of the present invention have been particularly shown and described with reference to those embodiments, those skilled in the art will understand that various changes in form and detail may be made without departing from the scope of the embodiments of the invention as encompassed by the appended claims. |
Semiconductor devices with redistribution structures and associated systems and methods are disclosed herein. In one embodiment, a semiconductor package includes a first semiconductor die including a first redistribution structure; and a second semiconductor die including a second redistribution structure. The first and second semiconductor dies may be mounted on a package substrate such that the first and second redistribution structures are aligned with each other. In some embodiments, an interconnect structure may be between the first and second semiconductor dies to electrically couple the first and second redistribution structures to each other. The first and second redistribution structures may be configured such that signal routing between the first and second semiconductor dies may vary based on a location of the interconnect structure. |
1. A semiconductor package comprising:
a package substrate;
a first semiconductor die mounted to the package substrate, the first semiconductor die comprising a first redistribution structure including:
a first signal trace electrically coupled to a first die contact, a first interconnect pad, and a first package contact, and
a second signal trace electrically coupled to a second interconnect pad and a second package contact, wherein the second signal trace is electrically isolated from the first signal trace; and
a second semiconductor die mounted to the first semiconductor die, the second semiconductor die including a second redistribution structure having a third signal trace electrically coupled to a second die contact, a third interconnect pad, and a fourth interconnect pad,
wherein (1) the first interconnect pad of the first redistribution structure is aligned with the third interconnect pad of the second redistribution structure such that the first and third interconnect pads can be bridged by an interconnect structure, and (2) the second interconnect pad of the first redistribution structure is aligned with the fourth interconnect pad of the second redistribution structure such that the second and fourth interconnect pads can be bridged by an interconnect structure.
2. The semiconductor package of claim 1, further comprising an interconnect structure electrically coupling the first and second semiconductor die.
3. The semiconductor package of claim 2, wherein the interconnect structure comprises solder bumps.
4. The semiconductor package of claim 2, wherein the interconnect structure connects the first interconnect pad to the third interconnect pad such that both the first and second die contacts are electrically coupled to the first package contact.
5. The semiconductor package of claim 2, wherein the interconnect structure connects the second interconnect pad to the fourth interconnect pad such that the first die contact is electrically coupled to the first package contact and the second die contact is electrically coupled to the second package contact.
6. The semiconductor package of claim 1, wherein:
the first redistribution structure is on the upper surface of the first semiconductor die;
the second redistribution structure is on the lower surface of the second semiconductor die; and
the first and second semiconductor die are mounted on the package substrate with the upper surface of the first semiconductor die facing the lower surface of the second semiconductor die.
7. The semiconductor package of claim 1, wherein:
the first die contact, the first interconnect pad, and the second interconnect pad are at an interior portion of the first semiconductor die;
the first and second package contacts are at a peripheral portion of the first semiconductor die; and
the second die contact, the third interconnect pad, and the fourth interconnect pad are at an interior portion of the second semiconductor die.
8. The semiconductor package of claim 1, wherein the first and second package contacts are electrically coupled to respective first and second bond pads on the package substrate via wire bonds.
9. The semiconductor package of claim 8, wherein the package substrate is coupled to a first electrical connector and a second electrical connector, the first electrical connector is electrically coupled to the first bond pad, and the second electrical connector is electrically coupled to the second bond pad.
10.
The semiconductor package of claim 9, wherein the first and second electrical connectors are individual solder balls of a ball grid array.
11. The semiconductor package of claim 9, wherein the first and second bond pads are electrically coupled to the first and second electrical connectors via first and second wiring structures, respectively, and wherein the first and second wiring structures are routed on different layers of the package substrate.
12. A method of manufacturing a semiconductor package, the method comprising:
forming a first redistribution structure on a first semiconductor die, the first redistribution structure including:
a first signal trace electrically coupled to a first die contact, a first interconnect pad, and a first package contact, and
a second signal trace electrically coupled to a second interconnect pad and a second package contact;
forming a second redistribution structure on a second semiconductor die, the second redistribution structure including a third signal trace electrically coupled to a second die contact, a third interconnect pad, and a fourth interconnect pad;
based on a design of the semiconductor package, selecting a first location or a second location for an interconnect structure between the first and second semiconductor die, wherein:
when in the first location, the interconnect structure is between the first interconnect pad and the third interconnect pad so as to electrically couple the second die contact to the first package contact, and
when in the second location, the interconnect structure is between the second interconnect pad and the fourth interconnect pad so as to electrically couple the second die contact to the second package contact; and
electrically coupling the first and second semiconductor die with the interconnect structure in the selected first or second location.
13. The method of claim 12, wherein the first signal trace is electrically isolated from the second signal trace.
14. The method of claim 12, wherein the design is a x4 and/or x8 package design, and the method further comprises positioning the interconnect structure in the first location such that the first and second die contacts are electrically coupled to the first package contact.
15. The method of claim 14, further comprising transmitting a signal from one or more of the first die contact or the second die contact to the first package contact.
16. The method of claim 12, wherein the design is a x16 package design, and the method further comprises positioning the interconnect structure in the second location such that the first die contact is electrically coupled to the first package contact and the second die contact is electrically coupled to the second package contact.
17. The method of claim 16, further comprising:
transmitting a first signal from the first die contact to the first package contact; and
transmitting a second signal from the second die contact to the second package contact, wherein the second signal is different from the first signal.
18. The method of claim 12, further comprising:
mounting the first semiconductor die on a package substrate; and
mounting the second semiconductor die on the first semiconductor die.
19. The method of claim 18, wherein the first and second semiconductor die are mounted in a face-to-face configuration.
20. The method of claim 18, further comprising wire bonding the first semiconductor die to the package substrate.
21.
The method of claim 20, further comprising transmitting signals from the second semiconductor die to the package substrate via the first and second redistribution structures and the interconnect structure. |
Semiconductor device with redistribution structure configured for switchable routing

Technical Field

The present technology relates generally to semiconductor devices and, more specifically, to semiconductor devices having redistribution structures configured to accommodate different packaging designs.

Background

Packaged semiconductor dies, including memory chips, microprocessor chips, and imager chips, typically include a semiconductor die mounted on a substrate and encased in a protective covering. The semiconductor die may include functional features, such as memory cells, processor circuits, and imager devices, as well as bond pads electrically connected to the functional features. The bond pads can be electrically connected to terminals outside the protective covering to allow the semiconductor die to be connected to higher level circuitry.

Market pressures are continually driving semiconductor manufacturers to reduce the size of die packages to accommodate the space constraints of electronic devices, while also driving them to increase the functional capacity of each package to meet operating parameters. One method for increasing the processing capability of a semiconductor package without substantially increasing the surface area covered by the package (i.e., the "footprint" of the package) is to vertically stack multiple semiconductor dies on top of each other in a single package. The dies in such vertically stacked packages may be electrically coupled to each other and/or to the substrate via wires, interconnects, or other conductive structures. However, conventional structures and techniques for interconnecting vertically stacked semiconductor dies may not be able to accommodate different semiconductor package designs.

Description of the Drawings

Many aspects of the present technology can be better understood with reference to the drawings. Components in the figures are not necessarily to scale. Rather, the emphasis is on clearly illustrating the principles of the present technology.

FIG. 1 is a side cross-sectional view of a semiconductor package configured in accordance with an embodiment of the present technology.

FIGS. 2A and 2B are perspective views of first and second redistribution structures configured for use with different packaging designs in accordance with an embodiment of the present technology.

FIGS. 3A and 3B illustrate a first semiconductor die and a second semiconductor die, respectively, configured in accordance with embodiments of the present technology.

FIGS. 4A and 4B illustrate signal routing through the first and second semiconductor dies of FIGS. 3A and 3B, respectively, in a first package design configured in accordance with embodiments of the present technology.

FIGS. 5A and 5B illustrate signal routing through the first and second semiconductor dies of FIGS. 3A and 3B, respectively, in a second package design configured in accordance with embodiments of the present technology.

FIGS. 6A-6D illustrate signal routing through a packaging substrate configured in accordance with embodiments of the present technology.

FIG. 7 is a schematic diagram of a system including a semiconductor device or package configured in accordance with an embodiment of the present technology.

Detailed Description

Specific details of several embodiments of semiconductor devices and associated systems and methods are described below. Those skilled in the relevant art will recognize that suitable stages of the methods described herein may be performed at the wafer level or at the die level.
Thus, depending on the context in which it is used, the term "substrate" may refer to a wafer-level substrate or to a singulated die-level substrate. In addition, unless the context dictates otherwise, conventional semiconductor fabrication techniques may be used to form the structures disclosed herein. For example, material may be deposited using chemical vapor deposition, physical vapor deposition, atomic layer deposition, plating, electroless plating, spin coating, and/or other suitable techniques. Similarly, material may be removed using, for example, plasma etching, wet etching, chemical-mechanical planarization, or other suitable techniques.

In several embodiments described below, a semiconductor package configured in accordance with the present techniques includes a first semiconductor die including a first redistribution structure and a second semiconductor die including a second redistribution structure. The first and second semiconductor die can be mounted on the substrate in a face-to-face (F2F) configuration such that at least some components of the first redistribution structure are aligned with corresponding components of the second redistribution structure. The semiconductor package may further include at least one interconnect structure (e.g., a solder bump) between the first and second redistribution structures to electrically connect the first and second semiconductor die to each other.

In some embodiments, the first and second redistribution structures are each configured to be compatible with multiple package designs (e.g., x4, x8, and/or x16 package designs). The location of the interconnect structures can be used to switch or otherwise change the routing of signals through the first and second redistribution structures to accommodate these different package designs. Therefore, rather than requiring different redistribution structures for different packages, the present technique may allow the same redistribution structure design to be used in different packages simply by changing the layout of the interconnect structure. Accordingly, the present techniques may be desirable for reducing cost and supply chain complexity and for increasing the efficiency and flexibility of design and manufacturing processes.

Numerous specific details are disclosed herein to provide a thorough and useful description of embodiments of the present technology. Those skilled in the art will understand, however, that the technology may have additional embodiments and that the technology may be practiced without several of the details of the embodiments described below with reference to FIGS. 1-7. For example, some details of semiconductor devices and/or packages that are well known in the art have been omitted so as not to obscure the present technology. In general, it should be understood that various other devices and systems, in addition to the specific embodiments disclosed herein, may be within the scope of the present technology.

As used herein, the terms "vertical," "lateral," "upper," "lower," "above," and "below" may refer to the relative orientation or location of features in a semiconductor device, given the orientation shown in the figures. For example, "upper" or "uppermost" may refer to a feature that is positioned closer to the top of the page than another feature.
However, these terms should be interpreted broadly to encompass semiconductor devices having other orientations, such as inverted or inclined orientations, where top/bottom, above/below, up/down, and left/right can be interchanged depending on the orientation.

FIG. 1 is a side cross-sectional view of a semiconductor package 100 ("package 100") configured in accordance with an embodiment of the present technology. The package 100 may include a first semiconductor die 102a and a second semiconductor die 102b disposed over a package substrate 103. The first and second semiconductor die 102a-b may each include a respective semiconductor substrate 104a-b (e.g., a silicon substrate, a gallium arsenide substrate, an organic laminate substrate, etc.). In some embodiments, the first and second semiconductor die 102a-b are arranged vertically, with the second semiconductor die 102b mounted on the first semiconductor die 102a such that the lower surface 108b of the second semiconductor die 102b faces the upper surface 106a of the first semiconductor die 102a. The first semiconductor die 102a may be mounted on the package substrate 103 such that the lower surface 108a of the first semiconductor die 102a faces and is coupled to the package substrate 103.

In some embodiments, at least one of the surfaces of each of the first and second semiconductor die 102a-b is an active surface that includes various types of semiconductor components, such as memory circuits (e.g., dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, or other types of memory circuits), controller circuits (e.g., DRAM controller circuits), logic circuits, processing circuits, circuit elements (e.g., wires, traces, interconnects, transistors, etc.), imaging components, and/or other semiconductor features. The first and second semiconductor die 102a-b may be mounted such that the active surfaces of the semiconductor die 102a-b face each other (e.g., a F2F configuration). For example, in the illustrated embodiment, the upper surface 106a of the first semiconductor die 102a and the lower surface 108b of the second semiconductor die 102b are active surfaces.

The first and second semiconductor die 102a-b may be coupled (e.g., mechanically, thermally, and/or electrically) to each other through at least one interconnect structure 109 (e.g., bumps, microbumps, pillars, columns, studs, etc.; only a single interconnect structure is shown in FIG. 1 for clarity). Each interconnect structure 109 may be formed from any suitable conductive material (e.g., copper, nickel, gold, silicon, tungsten, solder (e.g., SnAg-based solder), conductive epoxy, combinations thereof, etc.) and may be formed by electroplating, electroless plating, or another suitable process. In some embodiments, the interconnect structure 109 may also include a barrier material (e.g., nickel, a nickel-based intermetallic compound, and/or gold). The barrier material may promote adhesion and/or prevent or at least inhibit electromigration of the copper or other metals used to form the interconnect structure 109. Optionally, the interconnect structure 109 may be surrounded by an underfill material (not shown).

The package substrate 103 may be or include an interposer, such as a printed circuit board, a dielectric spacer, another semiconductor die (e.g., a logic die), or another suitable substrate.
In some embodiments, the package substrate 103 includes semiconductor components (e.g., doped silicon wafers or gallium arsenide wafers), non-conductive components (e.g., various ceramic substrates, such as aluminum oxide (Al2O3), aluminum nitride, etc.), and/or conductive portions (e.g., interconnect circuitry, through silicon vias (TSVs), etc.). The package substrate 103 may further include electrical connectors 124 (e.g., solder balls, conductive bumps, conductive posts, conductive epoxy, and/or other suitable conductive elements) electrically coupled to the package substrate 103 and configured to electrically couple the package 100 to external devices or circuitry (not shown).

The package 100 may further include a molding material 126 formed over the package substrate 103 and/or at least partially surrounding the first and second semiconductor die 102a-b. The molding material 126 may be a resin, epoxy, silicone-based material, polyimide, or any other material suitable for encapsulating at least a portion of the first and second semiconductor die 102a-b and/or the package substrate 103 to protect these components from contamination and/or physical damage. In some embodiments, the semiconductor package 100 includes other components, such as external heat sinks, sleeves (e.g., thermal sleeves), electromagnetic interference (EMI) shielding components, and the like.

In some embodiments, the first and second semiconductor die 102a-b each include a respective redistribution layer or structure. For example, as shown in FIG. 1, the first semiconductor die 102a includes a first redistribution structure 110a formed on the upper surface 106a, and the second semiconductor die 102b includes a second redistribution structure 110b formed on the lower surface 108b. The first and second redistribution structures 110a-b may each include one or more conductive components, such as contacts, traces, pads, pins, wires, circuitry, and the like. Components of the redistribution structures 110a-b may be made of any suitable conductive material, such as one or more metals (e.g., titanium, tungsten, cobalt, nickel, platinum, etc.), metal-containing compositions (e.g., metal silicides, metal nitrides, metal carbides, etc.), and/or conductively doped semiconductor materials (e.g., conductively doped silicon, conductively doped germanium, etc.). In some embodiments, the redistribution structures 110a-b are or include an embedded redistribution layer (iRDL). The iRDL can be formed at a front-end stage of the fabrication process (e.g., prior to wafer probe testing).

The first and second redistribution structures 110a-b can be configured to electrically couple different portions of the respective semiconductor die for routing signals therebetween. For example, the first redistribution structure 110a may include a first signal trace 112a extending between and electrically coupling a first die contact or pin 114a and a package contact or pin 116. The first die contact 114a and the package contact 116 may be at different locations on the first semiconductor die 102a. For example, the first die contact 114a may be located at or near a central and/or inner portion of the first semiconductor die 102a, while the package contact 116 may be located at or near a peripheral portion of the first semiconductor die 102a.
The package contact 116 may be electrically coupled to a corresponding bond pad 118 on the package substrate 103 via a conductive element such as a wire 120 (e.g., a wire bond). Thus, signals originating from the first semiconductor die 102a may be transmitted to the package substrate 103 via the first redistribution structure 110a (e.g., from the first die contact 114a through the first signal trace 112a, the package contact 116, the wire 120, and the bond pad 118).

In the illustrated embodiment, the second semiconductor die 102b does not have any package contacts for direct coupling to the substrate 103. Instead, signals from the second semiconductor die 102b may be routed indirectly to the package substrate 103 via the first semiconductor die 102a, as described in more detail below. However, in other embodiments, the second semiconductor die 102b may include one or more package contacts configured to connect directly to the package substrate 103 (e.g., via wire bonds) to allow direct signal transmission between the second semiconductor die 102b and the package substrate 103. Optionally, some signals from the second semiconductor die 102b may be transmitted indirectly to the package substrate 103 via the first semiconductor die 102a, while other signals may be transmitted directly to the package substrate 103.

In the embodiment of FIG. 1, the first and second redistribution structures 110a-b and the interconnect structure 109 route signals from the second semiconductor die 102b to the first semiconductor die 102a and the package substrate 103. The first signal trace 112a of the first redistribution structure 110a may be connected to a first interconnect pad 122a (e.g., a bump pad). The first interconnect pad 122a may be at or near a central and/or inner portion of the first semiconductor die 102a, e.g., proximate to the first die contact 114a. In some embodiments, the first interconnect pad 122a is located between the first die contact 114a and the package contact 116 along the first signal trace 112a. The second redistribution structure 110b may include a second signal trace 112b extending between and electrically coupling a second die contact or pin 114b and a second interconnect pad 122b (e.g., a bump pad). The second die contact 114b and the second interconnect pad 122b may be located near each other, for example at or near a central and/or inner portion of the second semiconductor die 102b.

As shown in FIG. 1, when the first and second semiconductor die 102a-b are vertically arranged in the F2F configuration, the first and second redistribution structures 110a-b may face each other such that the first and second interconnect pads 122a-b are aligned. The first and second interconnect pads 122a-b may be electrically coupled to each other via the interconnect structure 109.
Thus, signals originating from the second semiconductor die 102b may be transmitted to the package substrate 103 via the first and second redistribution structures 110a-b (e.g., from the second die contact 114b through the second signal trace 112b, the second interconnect pad 122b, the interconnect structure 109, the first interconnect pad 122a, the first signal trace 112a, the package contact 116, the wire 120, and the bond pad 118).

In some embodiments, the first and second redistribution structures 110a-b are configured to accommodate different types of semiconductor package designs. For example, the first and second redistribution structures 110a-b can be used with at least two different package designs (e.g., a x4/x8 design and a x16 design). In some embodiments, the x4/x8 package provides 8 different data channels, while the x16 design provides 16 different data channels. Different package designs may involve different signal routing between the die contacts of the first semiconductor die 102a, the die contacts of the second semiconductor die 102b, and the package contacts of the first semiconductor die 102a. In such embodiments, instead of changing the design of the first and second redistribution structures 110a-b, the location of the interconnect structure 109 can be used to switch the signal routing between these components to different configurations.

FIGS. 2A and 2B are perspective views of a first redistribution structure 200a and a second redistribution structure 200b configured for use with different packaging designs in accordance with an embodiment of the present technology. The first and second redistribution structures 200a-b may be incorporated into any of the embodiments described herein (e.g., as part of the first and second redistribution structures 110a-b described with respect to FIG. 1). For example, the first redistribution structure 200a may be formed on the upper surface of a first semiconductor die (e.g., a lower semiconductor die in a F2F semiconductor package; not shown), and the second redistribution structure 200b may be formed on the lower surface of a second semiconductor die (e.g., an upper semiconductor die in a F2F semiconductor package; not shown). In other embodiments, this configuration may be reversed such that the second redistribution structure 200b is formed on the upper surface of the first semiconductor die and the first redistribution structure 200a is formed on the lower surface of the second semiconductor die.

Referring to FIGS. 2A and 2B together, the first redistribution structure 200a includes a first signal trace 202 and a second signal trace 204. The first signal trace 202 may electrically couple the first die contact 206, the first interconnect pad 208, and the first package contact 210. The second signal trace 204 can electrically couple the second interconnect pad 212 to the second package contact 214. In some embodiments, the first die contact 206 includes or is coupled to an output pin (e.g., a data pin, address pin, control pin, etc.) of the first semiconductor die. The first interconnect pad 208 may be located between the first die contact 206 and the first package contact 210 along the first signal trace 202. The first package contact 210 and the second package contact 214 may be configured to be electrically coupled to corresponding first and second bond pads of a package substrate via wire bonding or other techniques known to those skilled in the art.
The first die contact 206, the first interconnect pad 208, and the second interconnect pad 212 may be located at a first portion (e.g., a central and/or inner portion) of the first semiconductor die, while the first package contact 210 and the second package contact 214 may be located at a second, different portion (e.g., a peripheral portion) of the first semiconductor die. In some embodiments, the first signal trace 202 and the second signal trace 204 are spaced apart and/or electrically isolated from each other such that signals carried by the first signal trace 202 are not transmitted to the second signal trace 204, and vice versa.

The second redistribution structure 200b includes a third signal trace 216. The third signal trace 216 may electrically couple a second die contact 218, a third interconnect pad 220, and a fourth interconnect pad 222. In some embodiments, the second die contact 218 includes or is coupled to an output pin (e.g., a data pin, address pin, control pin, etc.) of the second semiconductor die. The third interconnect pad 220 may be located between the second die contact 218 and the fourth interconnect pad 222 along the third signal trace 216. The second die contact 218, the third interconnect pad 220, and the fourth interconnect pad 222 may be located at a central and/or inner portion of the second semiconductor die. In some embodiments, the third signal trace 216 does not include any package contacts or other components directly connected to the package substrate.

When the first and second semiconductor dies are assembled in an F2F configuration, the first and second redistribution structures 200a-b may be positioned proximate to each other such that some or all of their interconnect pads are aligned and may be bridged by an interconnect structure 224. For example, in the illustrated embodiment, the first interconnect pad 208 of the first redistribution structure 200a is aligned with the third interconnect pad 220 of the second redistribution structure 200b such that the first interconnect pad 208 and the third interconnect pad 220 can be electrically and mechanically coupled to each other by the interconnect structure 224. As can be seen in FIGS. 2A and 2B, the first interconnect pad 208 extends at least partially over the third interconnect pad 220 such that, when viewed from directly above or directly below, the footprint of the first interconnect pad 208 at least partially covers the footprint of the third interconnect pad 220. Optionally, the central vertical axis of the first interconnect pad 208 may be collinear with or at least partially overlap the central vertical axis of the third interconnect pad 220. The second interconnect pad 212 of the first redistribution structure 200a may be aligned with the fourth interconnect pad 222 of the second redistribution structure 200b in a similar manner. In some embodiments, the interconnect structure 224 is used to electrically couple the first and second redistribution structures 200a-b to each other. The positioning of the interconnect structure 224 may be selected to form a desired signal routing path between the first die contact 206, the second die contact 218, the first package contact 210, and the second package contact 214.
Referring to FIG. 2A, for example, in a first package design (e.g., an x4 and/or x8 package design), the interconnect structure 224 can electrically and mechanically couple the first signal trace 202 of the first redistribution structure 200a to the third signal trace 216 of the second redistribution structure 200b. In the illustrated embodiment, the interconnect structure 224 is located between the first interconnect pad 208 of the first redistribution structure 200a and the third interconnect pad 220 of the second redistribution structure 200b, thereby electrically coupling the first signal trace 202 to the third signal trace 216. Thus, signals from the first die contact 206 of the first semiconductor die and/or the second die contact 218 of the second semiconductor die are transmitted to the first package contact 210.

In the illustrated embodiment of FIG. 2A, there is no interconnect structure between the second interconnect pad 212 of the first redistribution structure 200a and the fourth interconnect pad 222 of the second redistribution structure 200b, such that the second signal trace 204 of the first redistribution structure 200a remains electrically isolated from the third signal trace 216 of the second redistribution structure 200b. Accordingly, signals from the second die contact 218 of the second semiconductor die are not transmitted to the second package contact 214. In some embodiments, the second package contact 214 also does not receive any signals from the first semiconductor die because the second signal trace 204 is not connected to any die contacts on the first semiconductor die. Thus, the first package contact 210 can transmit signals from the first and/or second semiconductor die, while the second package contact 214 remains unused.

Referring to FIG. 2B, in a second package design (e.g., an x16 package design), the interconnect structure 224 can electrically and mechanically couple the second signal trace 204 of the first redistribution structure 200a to the third signal trace 216 of the second redistribution structure 200b. In the illustrated embodiment, the interconnect structure 224 is located between the second interconnect pad 212 of the first redistribution structure 200a and the fourth interconnect pad 222 of the second redistribution structure 200b, thereby electrically coupling the second signal trace 204 to the third signal trace 216. Accordingly, a signal from the second die contact 218 of the second semiconductor die may be transmitted to the second package contact 214. In some embodiments, the second package contact 214 does not receive any signals from the first semiconductor die because the second signal trace 204 is not connected to any die contacts on the first semiconductor die.

In the illustrated embodiment of FIG. 2B, there is no interconnect structure between the first interconnect pad 208 of the first redistribution structure 200a and the third interconnect pad 220 of the second redistribution structure 200b, such that the first signal trace 202 of the first redistribution structure 200a remains electrically isolated from the third signal trace 216 of the second redistribution structure 200b. Accordingly, the first package contact 210 may receive signals from the first die contact 206, but not from the second die contact 218.
In such embodiments, the first package contact 210 may transmit signals from the first semiconductor die, while the second package contact 214 may transmit signals from the second semiconductor die.

The first and second redistribution structures 200a-b can be configured in many different ways to achieve the package-dependent signal routing described herein. For example, although in FIGS. 2A and 2B the first interconnect pad 208 is between the first die contact 206 and the first package contact 210, in other embodiments the first die contact 206 may be between the first interconnect pad 208 and the first package contact 210. As another example, the positions of the first signal trace 202 and the second signal trace 204 may be interchanged such that the first interconnect pad 208 of the first redistribution structure 200a is aligned with the fourth interconnect pad 222 of the second redistribution structure 200b, and the second interconnect pad 212 of the first redistribution structure 200a is aligned with the third interconnect pad 220 of the second redistribution structure 200b. Optionally, the first die contact 206 may be omitted and/or the second signal trace 204 may be electrically coupled to a die contact. In some embodiments, the first redistribution structure 200a includes additional signal traces (e.g., one, two, three, four, five, or more additional signal traces), each with corresponding interconnect pads, and the second redistribution structure 200b may contain a corresponding number of interconnect pads to allow the third signal trace 216 to be selectively connected to the additional signal traces based on the positioning of the interconnect structure 224.

FIGS. 3A and 3B illustrate a first semiconductor die 102a and a second semiconductor die 102b, respectively, configured in accordance with embodiments of the present technology. More specifically, FIG. 3A is a top view of the upper surface 106a of the first semiconductor die 102a, and FIG. 3B is a top view of the lower surface 108b of the second semiconductor die 102b. As previously described, the first and second semiconductor dies 102a-b may be arranged in an F2F configuration, wherein the lower surface 108b of the second semiconductor die 102b is aligned with and located over the upper surface 106a of the first semiconductor die 102a. The first semiconductor die 102a includes a first redistribution structure 300a formed on the upper surface 106a, and the second semiconductor die 102b includes a second redistribution structure 300b formed on the lower surface 108b. The first and second redistribution structures 300a-b may be substantially similar to the corresponding structures previously described with respect to FIGS. 1-2B.

Referring to FIG. 3A, for example, the first redistribution structure 300a may include signal traces 302a, die contacts 304a, interconnect pads 306a, and package contacts 308. As can be seen in FIG. 3A, some signal traces 302a are connected to corresponding die contacts 304a, interconnect pads 306a, and package contacts 308 (e.g., signal trace 312), while other signal traces 302a are connected to corresponding interconnect pads 306a and package contacts 308 but are not connected to any of the die contacts 304a (e.g., signal trace 314).
The signal traces 302a can be spaced apart and/or electrically isolated from each other such that signal transmission can occur independently along each signal trace 302a.

In the illustrated embodiment, the die contacts 304a are arranged in a single row along or near the central axis of the first semiconductor die 102a, and the interconnect pads 306a are arranged in rows on both sides of the row of die contacts 304a. The package contacts 308 may be arranged in two rows extending along two of the side edges of the first semiconductor die 102a, respectively. Accordingly, the signal traces 302a may extend outward in both directions from a central portion to a peripheral portion of the semiconductor die to route signals from the die contacts 304a and/or the interconnect pads 306a to the package contacts 308. In other embodiments, the first redistribution structure 300a may be configured differently (e.g., the die contacts 304a may be arranged in two or more rows, the package contacts 308 may be arranged in a single row along individual side edges of the first semiconductor die 102a, the interconnect pads 306a may be arranged in fewer or more rows, the interconnect pads 306a may be located on a single side of the row of die contacts 304a, etc.).

Referring to FIG. 3B, the second redistribution structure 300b may include signal traces 302b, die contacts 304b, and interconnect pads 306b. As can be seen in FIG. 3B, each signal trace 302b may be connected to a corresponding die contact 304b and at least two interconnect pads 306b. The signal traces 302b may be spaced apart and/or electrically isolated from each other such that signals may be transmitted independently along each signal trace 302b. In the illustrated embodiment, the second redistribution structure 300b does not contain any package contacts for direct connection to the package substrate. However, in other embodiments, the second redistribution structure 300b may include one or more package contacts for direct connection to the package substrate.

In the illustrated embodiment, the die contacts 304b are arranged in a single row along or near the central axis of the second semiconductor die 102b, and the interconnect pads 306b are arranged in rows on both sides of the row of die contacts 304b. In other embodiments, the second redistribution structure 300b may be configured differently (e.g., the die contacts 304b may be arranged in two or more rows, the interconnect pads 306b may be arranged in fewer or more rows, the interconnect pads 306b may be located on a single side of the row of die contacts 304b, etc.).

As previously described with respect to FIGS. 2A and 2B, the signal routing between the first and second semiconductor dies 102a-b through the redistribution structures 300a-b may be switched or otherwise changed based on the positioning of interconnect structures (e.g., solder balls) between the first and second semiconductor dies 102a-b. In some embodiments, the arrangement of the interconnect pads 306a of the first semiconductor die 102a may be the same as or substantially similar to the arrangement of the interconnect pads 306b of the second semiconductor die 102b.
Accordingly, the first and second redistribution structures 300a-b may be bridged by interconnect structures located between the interconnect pads 306a-b, as previously described.

Optionally, the interconnect pads 306a of the first redistribution structure 300a may be arranged in pairs (or larger groupings) to allow switchable signal routing between corresponding pairs of signal traces 302a, and the interconnect pads 306b of the second redistribution structure 300b may be arranged in pairs (or larger groupings) aligned with the pairs of the first redistribution structure 300a. For example, a pair of interconnect pads 310a ("pair 310a") of the first redistribution structure 300a may be aligned with a corresponding pair of interconnect pads 310b ("pair 310b") of the second redistribution structure 300b to permit switchable signal routing through the pair of signal traces 312, 314 of the first semiconductor die 102a.

FIGS. 4A and 4B illustrate signal routing through the first and second semiconductor dies 102a-b of FIGS. 3A and 3B, respectively, in a first package design (e.g., an x4/x8 package design) configured in accordance with embodiments of the present technology. In the illustrated embodiment, some interconnect pads 306a-b are connected by an interconnect structure (not shown) ("connected"), while other interconnect pads 306a-b are not connected by any interconnect structure ("not connected"). Depending on the arrangement of the interconnect structures, each signal trace 302a and package contact 308 may: (1) receive a signal from the first semiconductor die 102a but not the second semiconductor die 102b ("die 1"), (2) receive a signal from the second semiconductor die 102b but not the first semiconductor die 102a ("die 2"), (3) receive signals from the first and/or second semiconductor dies 102a-b ("die 1 and/or die 2"), or (4) receive no signal from either the first or second semiconductor die 102a-b ("none").

For example, in the illustrated embodiment, a first interconnect pad 316a of pair 310a is connected to a first interconnect pad 316b of pair 310b by an interconnect structure (not shown), while the remaining interconnect pads of pairs 310a-b are not connected to each other. Thus, signals from the die contact 318a of the first semiconductor die 102a and/or the die contact 318b of the second semiconductor die 102b can be transmitted to the package contact 320 via the signal trace 312, while the signal trace 314 and the package contact 322 remain unused and do not receive signals from either the first or second semiconductor die 102a-b.
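The connected/not-connected logic just described can be summarized in a short sketch. The following C fragment is purely an illustrative model (it is not part of the disclosure; the enum, function names, and labels are hypothetical), using the trace and pad numerals from FIGS. 3A-4B; the x16 case of FIGS. 5A and 5B below simply moves the bridge to the other pad pair.

#include <stdio.h>

/* Die-1 routing facts from the text: trace 312 ties die contact 318a to
 * package contact 320 via pad 316a; trace 314 ties pad 317a to package
 * contact 322 but has no die contact of its own. Die-2 trace ties die
 * contact 318b to pads 316b and 317b. The chosen bridge between the
 * aligned pad pairs determines what each package contact sees. */
enum bridge { BRIDGE_316A_316B /* x4/x8 */, BRIDGE_317A_317B /* x16 */ };

static const char *contact_320_source(enum bridge b) {
    /* Trace 312 always reaches die 1; bridging 316a-316b also ties in die 2. */
    return (b == BRIDGE_316A_316B) ? "die 1 and/or die 2" : "die 1";
}

static const char *contact_322_source(enum bridge b) {
    /* Trace 314 has no die-1 contact, so it only carries a signal when
     * pads 317a-317b are bridged to die 2's trace. */
    return (b == BRIDGE_317A_317B) ? "die 2" : "none";
}

int main(void) {
    printf("x4/x8: contact 320 <- %s, contact 322 <- %s\n",
           contact_320_source(BRIDGE_316A_316B),
           contact_322_source(BRIDGE_316A_316B));
    printf("x16:   contact 320 <- %s, contact 322 <- %s\n",
           contact_320_source(BRIDGE_317A_317B),
           contact_322_source(BRIDGE_317A_317B));
    return 0;
}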
FIGS. 5A and 5B illustrate signal routing through the first and second semiconductor dies 102a-b of FIGS. 3A and 3B, respectively, in a second package design (e.g., an x16 package design) configured in accordance with embodiments of the present technology. In the second package design, the locations of some or all of the interconnect structures (not shown) may differ from those in the first package design. Accordingly, the signal routing for some or all of the signal traces 302a and package contacts 308 may differ from the routing in the first package design. For example, a signal trace 302a and package contact 308 that previously received signals from the first and second semiconductor dies 102a-b may now receive signals only from the first semiconductor die 102a or only from the second semiconductor die 102b; previously unused signal traces 302a and package contacts 308 may now receive signals from the first and/or second semiconductor dies 102a-b; and so on.

For example, in the illustrated embodiment, the second interconnect pad 317a of pair 310a is connected to the second interconnect pad 317b of pair 310b by an interconnect structure (not shown), while the remaining interconnect pads of pairs 310a-b are not connected to each other. Thus, the signal trace 312 and the package contact 320 receive a signal from the die contact 318a of the first semiconductor die 102a, while the signal trace 314 and the package contact 322 receive a signal from the die contact 318b of the second semiconductor die 102b.

The present technology can provide switchable routing for many different types of signals in semiconductor packages, such as data signals, control signals, address signals, calibration signals, or any other signal type known to those skilled in the art. According to the techniques described herein, the connection and configuration of signals can be changed in a package-dependent manner as desired.

FIGS. 6A-6D illustrate signal routing through a package substrate 103 configured in accordance with embodiments of the present technology. More specifically, FIG. 6A is a top view of a first wiring layer 600a of the package substrate 103, FIG. 6B is a top view of a second wiring layer 600b of the package substrate 103, FIG. 6C is a top view of a third wiring layer 600c of the package substrate 103, and FIG. 6D is a top view of the package substrate 103 with the wiring layers 600a-c overlapping each other. The package substrate 103 may be incorporated into any of the embodiments of semiconductor packages described herein (e.g., the package 100 of FIG. 1).

The first wiring layer 600a of the package substrate 103 can be electrically coupled to the lower surface of the first semiconductor die 102a (only the outline of the first semiconductor die 102a is shown to illustrate its position). The second wiring layer 600b can be electrically coupled to an array of electrical connectors 124 (e.g., a ball grid array—FIGS. 6B and 6D include only the outlines of the electrical connectors 124 to illustrate their positioning relative to the package substrate 103). As previously described with respect to FIG. 1, the electrical connectors 124 may be used to electrically couple the package substrate 103 to external devices or other higher-level components to allow transmission of signals thereto. In some embodiments, the package substrate 103 is electrically coupled to the first semiconductor die 102a via wires (not shown). The wires may connect package contacts (not shown) on the first semiconductor die 102a to corresponding bond pads 118 included in or electrically coupled to the first wiring layer 600a of the package substrate 103. Each bond pad 118 may be electrically coupled to a corresponding electrical connector 124 via a wire, trace, metal layer or structure, via, or other conductive feature extending along and/or through the wiring layers 600a-c of the package substrate 103.
In some embodiments, the number and/or positioning of the bond pads 118 relative to the electrical connectors 124 may make it difficult or impossible to route all of the electrical interconnections between the bond pads 118 and the electrical connectors 124 in a single layer of the package substrate 103. For example, the positions of the bond pads 118 may be constrained by the geometry of the first semiconductor die 102a. As the width of the first semiconductor die 102a approaches the width of the array of electrical connectors 124, signal routing through the package substrate 103 can become more congested and more challenging. To ameliorate these issues, the package substrate 103 may route the electrical interconnections between the bond pads 118 and the electrical connectors 124 on multiple layers (e.g., at least two, three, four, or more layers). For example, a first subset of signals from the bond pads 118 may be routed through the first wiring layer 600a ("subset 1"), a second subset of signals may be routed through the second wiring layer 600b ("subset 2"), a third subset of signals may be routed through the third wiring layer 600c ("subset 3"), and so on.

In the illustrated embodiment, the bond pads 118 include, for example, a first subset of bond pads 610, a second subset of bond pads 620, and optionally a third subset of bond pads 630 (for clarity, reference numbers are shown for only individual instances of each subset). In some embodiments, the first bond pads 610 correspond to a first set of data signals (e.g., an upper byte) for the semiconductor package, the second bond pads 620 correspond to a second set of data signals (e.g., a lower byte), and the third bond pads 630 correspond to other signals (e.g., control signals, address signals, calibration signals, power signals, etc.). Each subset of the bond pads 118 may be electrically coupled to a corresponding subset of the array of electrical connectors 124 via a corresponding wiring structure. For example, the first bond pads 610 may be connected to a first subset of electrical connectors 612 via a first wiring structure 614, the second bond pads 620 may be connected to a second subset of electrical connectors 622 via a second wiring structure 624, and, optionally, the third bond pads 630 may be connected to a third subset of electrical connectors 632 via a third wiring structure 634.

Signals from the first bond pads 610 may be routed in the first wiring layer 600a. Accordingly, as shown in FIG. 6A, the first wiring structure 614 may be located in the first wiring layer 600a and may extend from the first bond pads 610 to corresponding first vias 616. In some embodiments, the first bond pads 610 are located at or near a peripheral portion of the package substrate 103, while the first vias 616 are located at or near an inner portion of the package substrate 103 away from the first bond pads 610. The first vias 616 may be located adjacent to the first electrical connectors 612 to provide electrical connection thereto. As shown in FIGS. 6A and 6B, for example, each of the first vias 616 may extend through the first wiring layer 600a to a location in the second wiring layer 600b that is adjacent to or near the corresponding first electrical connector 612.

Signals from the second bond pads 620 may be routed in the second wiring layer 600b instead of the first wiring layer 600a.
Therefore, as shown in FIG. 6A, in the first wiring layer 600a, the second bond pads 620 may be connected to corresponding second vias 626 located near the second bond pads 620 (e.g., near the peripheral portion of the package substrate 103). As shown in FIGS. 6A and 6B, the second vias 626 may extend through the first wiring layer 600a to locations in the second wiring layer 600b away from the corresponding second electrical connectors 622. The second wiring structure 624 may be located in the second wiring layer 600b and may extend from the second vias 626 to the second electrical connectors 622.

Referring also to FIG. 6C, the package substrate 103 may optionally include a third wiring layer 600c between the first and second wiring layers 600a-b. In such embodiments, the first vias 616 and the second vias 626 may extend through the third wiring layer 600c. The third wiring layer 600c may also be used to route signals from the third bond pads 630. Therefore, as shown in FIG. 6A, in the first wiring layer 600a, the third bond pads 630 may be connected to third vias 636 located near the third bond pads 630 (e.g., near the peripheral portion of the package substrate 103). As shown in FIGS. 6A and 6C, the third vias 636 may extend through the first wiring layer 600a and into the third wiring layer 600c. The third wiring structure 634 may be located in the third wiring layer 600c and may extend from the third vias 636 to fourth vias 638. The fourth vias 638 may be spaced apart from the third vias 636. As shown in FIGS. 6B and 6C, the fourth vias 638 may extend through the third wiring layer 600c to locations in the second wiring layer 600b that are adjacent to or near the corresponding third electrical connectors 632.

FIG. 6D shows the package substrate 103 with the wiring layers 600a-c overlapping each other. As can be seen from the illustrated embodiment, the use of multiple wiring layers as described herein enables multiple complex interconnections between the bond pads 118 and the electrical connectors 124. In other embodiments, the package substrate 103 may include fewer or more wiring layers (e.g., one, two, four, five, or more wiring layers), each wiring layer including corresponding wiring structures for routing signals between a subset of the bond pads 118 and the electrical connectors 124. The package substrate 103 may also include additional layers not shown in FIGS. 6A-6D. For example, the package substrate 103 may include one or more layers of insulating material between the wiring layers to reduce or prevent electrical interference. The package substrate 103 may also include one or more layers of material configured to provide structural support and/or mechanical strength.

Any of the semiconductor devices and/or packages having the features described above with reference to FIGS. 1-6D can be incorporated into any of a number of larger and/or more complex systems, a representative example of which is the system 700 shown schematically in FIG. 7. The system 700 may include a processor 702, memory 704 (e.g., SRAM, DRAM, flash memory, and/or other memory devices), input/output devices 706, and/or other subsystems or components 708. The semiconductor dies and/or packages described above with reference to FIGS. 1-6D may be included in any of the elements shown in FIG. 7. The resulting system 700 may be configured to perform any of a variety of suitable computing, processing, storage, sensing, imaging, and/or other functions.
Accordingly, representative examples of the system 700 include, but are not limited to, computers and/or other data processors, such as desktop computers, laptop computers, network appliances, handheld devices (e.g., palmtop computers, wearable computers, cellular or mobile phones, personal digital assistants, music players, etc.), tablet computers, multiprocessor systems, processor-based or programmable consumer electronics, network computers, and microcomputers. Additional representative examples of the system 700 include lights, cameras, vehicles, and the like. With regard to these and other examples, the system 700 may be housed in a single unit or distributed over multiple interconnected units, e.g., through a communications network. The components of the system 700 may accordingly include local and/or remote memory storage devices and any of a wide variety of suitable computer-readable media.

In view of the foregoing, it should be appreciated that specific embodiments of the present technology have been described herein for purposes of illustration, but that various modifications may be made without departing from the disclosure. Accordingly, the invention is not limited except as by the appended claims. Furthermore, certain aspects of the technology described in the context of particular embodiments may be combined or eliminated in other embodiments. Moreover, while advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the present technology. Accordingly, the present disclosure and associated technology can encompass other embodiments not expressly shown or described herein. |
A package-on-package (PoP) device includes a first package, a second package, and a bi-directional thermal electric cooler (TEC). The first package includes a first substrate and a first die coupled to the first substrate. The second package is coupled to the first package. The second package includes a second substrate and a second die coupled to the second substrate. The TEC is located between the first die and the second substrate. The TEC is adapted to dynamically dissipate heat back and forth between the first package and the second package. The TEC is adapted to dissipate heat from the first die to the second die in a first time period. The TEC is further adapted to dissipate heat from the second die to the first die in a second time period. The TEC is adapted to dissipate heat from the first die to the second die through the second substrate. |
1. A package on package (PoP) device comprising: a first package comprising: a first substrate; and a first die coupled to the first substrate; a second package coupled to the first package, the second package comprising: a second substrate; and a second die coupled to the second substrate; and a bi-directional thermal electric cooler (TEC) located between the first die and the second substrate, wherein the bi-directional TEC is adapted to dynamically dissipate heat back and forth between the first package and the second package.

2. The PoP device of claim 1, wherein the bi-directional TEC is adapted to dissipate heat from the first die to the second die in a first time period.

3. The PoP device of claim 2, wherein the bi-directional TEC is further adapted to dissipate heat from the second die to the first die in a second time period.

4. The PoP device of claim 2, wherein the bi-directional TEC is adapted to dissipate heat from the first die to the second die through the second substrate.

5. The PoP device of claim 1, wherein the bi-directional TEC is coupled to the first die through a first thermally conductive adhesive.

6. The PoP device of claim 1, wherein the bi-directional TEC is an array of a plurality of thermal electric coolers (TECs).

7. The PoP device of claim 1, wherein the bi-directional TEC is electrically coupled to a TEC controller through a plurality of interconnects that includes interconnects in the first die.

8. The PoP device of claim 1, wherein the bi-directional TEC is electrically coupled to a TEC controller through a plurality of interconnects that includes interconnects in the first encapsulation layer.

9. The PoP device of claim 1, wherein the bi-directional TEC is electrically coupled to a TEC controller through a plurality of interconnects that includes interconnects in the second substrate.

10. The PoP device of claim 1, wherein the first die is a first logic die and the second die is one of at least a second logic die or a memory die.

11. The PoP device of claim 1, wherein the first package further comprises a third die coupled to the first substrate, wherein the bi-directional TEC is further adapted to dynamically dissipate heat back and forth between the first die and the third die.

12. The PoP device of claim 1, wherein the first package further comprises a third die coupled to the first substrate, wherein the PoP device further comprises a second bi-directional TEC, wherein the combination of the bi-directional TEC and the second bi-directional TEC is adapted to dynamically dissipate heat back and forth between the first die and the third die.

13. The PoP device of claim 1, wherein the PoP device is incorporated into a device selected from a group comprising a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle, and further including the device.
14. A package on package (PoP) device comprising: a first package comprising: a first substrate; and a first die coupled to the first substrate; a second package coupled to the first package, the second package comprising: a second substrate; and a second die coupled to the second substrate; and a bi-directional heat transfer means located between the first die and the second substrate, wherein the bi-directional heat transfer means is configured to dynamically dissipate heat back and forth between the first package and the second package.

15. The PoP device of claim 14, wherein the bi-directional heat transfer means is configured to dissipate heat from the first die to the second die in a first time period.

16. The PoP device of claim 15, wherein the bi-directional heat transfer means is further configured to dissipate heat from the second die to the first die in a second time period.

17. The PoP device of claim 15, wherein the bi-directional heat transfer means is configured to dissipate heat from the first die to the second die through the second substrate.

18. The PoP device of claim 14, wherein the bi-directional heat transfer means is an array of a plurality of thermal electric coolers (TECs).

19. The PoP device of claim 15, wherein the PoP device is incorporated into a device selected from a group comprising a music player, a video player, an entertainment unit, a navigation device, a communications device, a mobile device, a mobile phone, a smartphone, a personal digital assistant, a fixed location terminal, a tablet computer, a computer, a wearable device, a laptop computer, a server, and a device in an automotive vehicle, and further including the device.

20. A method for thermal management of a package on package (POP) device, comprising: receiving a first temperature reading of a first die; receiving a second temperature reading of a second die; determining whether the first temperature reading of the first die is equal to or greater than a first maximum temperature of the first die; determining whether the second temperature reading of the second die is equal to or greater than a second maximum temperature of the second die; configuring a bi-directional thermal electric cooler (TEC) to dissipate heat from the first die to the second die when (i) the first temperature reading is equal to or greater than the first maximum temperature, and (ii) the second temperature reading is less than the second maximum temperature; and configuring the bi-directional thermal electric cooler (TEC) to dissipate heat from the second die to the first die when (i) the second temperature reading is equal to or greater than the second maximum temperature, and (ii) the first temperature reading is less than the first maximum temperature.

21. The method of claim 20, wherein configuring the bi-directional TEC to dissipate heat from the first die to the second die comprises configuring a TEC controller to send a first signal to the bi-directional TEC, wherein the first signal has a first polarity.

22. The method of claim 21, wherein configuring the bi-directional TEC to dissipate heat from the second die to the first die comprises configuring the TEC controller to send a second signal to the bi-directional TEC, wherein the second signal has a second polarity that is opposite to the first polarity.

23. The method of claim 20, further comprising configuring the bi-directional TEC to be inactive when (i) the first temperature reading is less than the first maximum temperature, and (ii) the second temperature reading is less than the second maximum temperature.

24. The method of claim 20, wherein receiving the first temperature reading, receiving the second temperature reading, determining whether the first temperature reading of the first die is equal to or greater than a first maximum temperature of the first die, and determining whether the second temperature reading of the second die is equal to or greater than a second maximum temperature of the second die are performed by a thermal controller.
25. The method of claim 24, wherein the thermal controller is implemented in the first die of the PoP device.

26. The method of claim 20, wherein receiving the first temperature reading of the first die comprises receiving at least one first temperature from at least one first temperature sensor, and wherein receiving the second temperature reading of the second die comprises receiving at least one second temperature from at least one second temperature sensor.

27. The method of claim 21, further comprising instructing the first die to reduce a first die performance when (i) the first temperature reading is equal to or greater than the first maximum temperature, and (ii) the second temperature reading is equal to or greater than the second maximum temperature.

28. The method of claim 27, further comprising configuring the bi-directional TEC to be inactive when (i) the first temperature reading is equal to or greater than the first maximum temperature, and (ii) the second temperature reading is equal to or greater than the second maximum temperature.

29. The method of claim 21, further comprising instructing the second die to reduce a second die performance when (i) the first temperature reading is equal to or greater than the first maximum temperature, and (ii) the second temperature reading is equal to or greater than the second maximum temperature.

30. A processor readable storage medium comprising one or more instructions for performing thermal management of a package on package (POP) device, which when executed by at least one processing circuit, cause the at least one processing circuit to: receive a first temperature reading of a first die; receive a second temperature reading of a second die; determine whether the first temperature reading of the first die is equal to or greater than a first maximum temperature of the first die; determine whether the second temperature reading of the second die is equal to or greater than a second maximum temperature of the second die; configure a bi-directional thermal electric cooler (TEC) to dissipate heat from the first die to the second die when (i) the first temperature reading is equal to or greater than the first maximum temperature, and (ii) the second temperature reading is less than the second maximum temperature; and configure the bi-directional thermal electric cooler (TEC) to dissipate heat from the second die to the first die when (i) the second temperature reading is equal to or greater than the second maximum temperature, and (ii) the first temperature reading is less than the first maximum temperature. |
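As a hedged illustration of the decision logic recited in the method claims above (this sketch is not part of the disclosure; the type and function names are hypothetical), the claimed comparisons reduce to a small selection function:

/* Illustrative sketch of the claimed thermal-management decision logic.
 * Readings t1/t2 would come from the dies' temperature sensors; t1_max
 * and t2_max are the per-die maximum temperatures. */
typedef enum {
    TEC_INACTIVE,       /* no heat pumping (claims 23, 28) */
    TEC_DIE1_TO_DIE2,   /* pump heat from the first die toward the second */
    TEC_DIE2_TO_DIE1    /* pump heat from the second die toward the first */
} tec_mode;

tec_mode select_tec_mode(float t1, float t1_max, float t2, float t2_max) {
    int die1_hot = (t1 >= t1_max);
    int die2_hot = (t2 >= t2_max);

    if (die1_hot && !die2_hot)
        return TEC_DIE1_TO_DIE2;  /* first configuring step of claim 20 */
    if (die2_hot && !die1_hot)
        return TEC_DIE2_TO_DIE1;  /* second configuring step of claim 20 */

    /* Both dies hot: leave the TEC inactive and reduce die performance
     * instead (claims 27-29); both dies within limits: TEC stays idle. */
    return TEC_INACTIVE;
}

Per claims 21 and 22, the two pumping directions correspond to drive signals of opposite polarity sent by the TEC controller.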
PACKAGE-ON-PACKAGE (POP) DEVICE COMPRISING BI-DIRECTIONAL THERMAL ELECTRIC COOLER

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to and the benefit of U.S. non-provisional patent application no. 14/709,276, filed in the United States Patent and Trademark Office on May 11, 2015, the entire content of which is incorporated herein by reference.

BACKGROUND

Field

[0002] Various features relate to a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).

Background

[0003] FIG. 1 illustrates an integrated device package 100 that includes a first die 102 and a package substrate 106. The package substrate 106 includes a dielectric layer and a plurality of interconnects 110. The package substrate 106 is a laminated substrate. The plurality of interconnects 110 includes traces, pads and/or vias. The first die 102 is coupled to the package substrate 106 through a first set of solder balls 112. The package substrate 106 is coupled to the PCB 108 through a second set of solder balls 116. FIG. 1 also illustrates a heat spreader 120 coupled to the die 102. An adhesive or thermal interface material may be used to couple the heat spreader 120 to the die 102. As shown in FIG. 1, the heat spreader 120 is adapted to dissipate heat away from the die 102 to an external environment. It is noted that heat may dissipate away from the die in various directions.

[0004] One drawback of the above configuration is that the heat spreader 120 is a passive heat dissipating device. Thus, there is no active control of how heat is dissipated. That is, the use of the heat spreader 120 does not allow for dynamic heat flow control. Second, the use of the heat spreader 120 is only applicable when a single die is used in the integrated device package. Today's mobile devices and/or wearable devices include many dies, and thus have more complicated configurations that require more intelligent thermal and/or heat dissipation management. Putting a heat spreader in a device that includes several dies will not provide effective thermal and/or heat dissipation management of the device.

[0005] Therefore, there is a need for a device that includes several dies and provides effective thermal management of the device, while at the same time meeting the needs and/or requirements of mobile computing devices and/or wearable computing devices.

SUMMARY

[0006] Various features relate to a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).

[0007] A first example provides a package on package (PoP) device that includes a first package and a second package coupled to the first package. The first package includes a first substrate, and a first die coupled to the first substrate. The second package includes a second substrate, and a second die coupled to the second substrate. The package on package (PoP) device also includes a bi-directional thermal electric cooler (TEC) located between the first die and the second substrate, where the bi-directional TEC is adapted to dynamically dissipate heat back and forth between the first package and the second package.

[0008] A second example provides a package on package (PoP) device that includes a first package and a second package coupled to the first package. The first package includes a first substrate, and a first die coupled to the first substrate.
The second package includes a second substrate, and a second die coupled to the second substrate. The package on package (PoP) device also includes a bi-directional heat transfer means located between the first die and the second substrate, where the bi-directional heat transfer means is configured to dynamically dissipate heat back and forth between the first package and the second package.

[0009] A third example provides a method for thermal management of a package on package (POP) device. The method receives a first temperature reading of a first die. The method receives a second temperature reading of a second die. The method determines whether the first temperature reading of the first die is equal to or greater than a first maximum temperature of the first die. The method determines whether the second temperature reading of the second die is equal to or greater than a second maximum temperature of the second die. The method configures a bi-directional thermal electric cooler (TEC) to dissipate heat from the first die to the second die when (i) the first temperature reading is equal to or greater than the first maximum temperature, and (ii) the second temperature reading is less than the second maximum temperature. The method configures the bi-directional thermal electric cooler (TEC) to dissipate heat from the second die to the first die when (i) the second temperature reading is equal to or greater than the second maximum temperature, and (ii) the first temperature reading is less than the first maximum temperature.

[0010] A fourth example provides a processor readable storage medium comprising one or more instructions for performing thermal management of a package on package (POP) device, which when executed by at least one processing circuit, cause the at least one processing circuit to: determine whether the first temperature reading of the first die is equal to or greater than a first maximum temperature of the first die; determine whether the second temperature reading of the second die is equal to or greater than a second maximum temperature of the second die; configure a bi-directional thermal electric cooler (TEC) to dissipate heat from the first die to the second die when (i) the first temperature reading is equal to or greater than the first maximum temperature, and (ii) the second temperature reading is less than the second maximum temperature; and configure the bi-directional thermal electric cooler (TEC) to dissipate heat from the second die to the first die when (i) the second temperature reading is equal to or greater than the second maximum temperature, and (ii) the first temperature reading is less than the first maximum temperature.

DRAWINGS

[0011] Various features, nature and advantages may become apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout.

[0012] FIG. 1 illustrates an integrated device package.

[0013] FIG. 2 illustrates a profile view of an example of a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).

[0014] FIG. 3 illustrates an example of a heat transfer flow in a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).

[0015] FIG. 4 illustrates an example of a heat transfer flow in a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).
[0016] FIG. 5 illustrates a profile view of another example of a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).

[0017] FIG. 6 illustrates a profile view of a bi-directional thermal electric cooler.

[0018] FIG. 7 illustrates an angled view of a bi-directional thermal electric cooler.

[0019] FIG. 8 illustrates an angled view of an assembly of bi-directional thermal electric coolers (TECs).

[0020] FIG. 9 illustrates an example of how a thermal electric cooler comprising several bi-directional thermal electric coolers (TECs) may be configured.

[0021] FIG. 10 illustrates a configuration of how a bi-directional thermal electric cooler (TEC) may be controlled by a thermal controller.

[0022] FIG. 11 illustrates another configuration of how a bi-directional thermal electric cooler (TEC) may be controlled by a thermal controller.

[0023] FIG. 12 illustrates another configuration of how a bi-directional thermal electric cooler (TEC) may be controlled by a thermal controller.

[0024] FIG. 13 illustrates a profile view of an example of a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC), where several exemplary electrical paths are highlighted.

[0025] FIG. 14 illustrates a profile view of an example of a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC), where several exemplary electrical paths are highlighted.

[0026] FIG. 15 illustrates a profile view of an example of a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC), where several exemplary electrical paths are highlighted.

[0027] FIG. 16 illustrates several temperature graphs and a TEC current graph to illustrate how the operation of a TEC may affect the temperature of several dies in a package-on-package (PoP) device.

[0028] FIG. 17 illustrates an exemplary flow diagram of a method for configuring a bi-directional thermal electric cooler (TEC) and controlling the temperatures of dies in a package-on-package (PoP) device.

[0029] FIG. 18 illustrates another exemplary flow diagram of a method for configuring a bi-directional thermal electric cooler (TEC) and controlling the temperatures of dies in a package-on-package (PoP) device.

[0030] FIG. 19 (which includes FIGS. 19A-19B) illustrates an exemplary sequence for fabricating a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).

[0031] FIG. 20 illustrates an exemplary flow diagram of a method for fabricating a package-on-package (PoP) device that includes a bi-directional thermal electric cooler.

[0032] FIG. 21 illustrates a profile view of another example of a package-on-package (PoP) device that includes a bi-directional thermal electric cooler (TEC).

[0033] FIG. 22 illustrates various electronic devices that may integrate a package-on-package (PoP) device, an integrated device package, a semiconductor device, a die, an integrated circuit and/or PCB described herein.

DETAILED DESCRIPTION

[0034] In the following description, specific details are given to provide a thorough understanding of the various aspects of the disclosure. However, it will be understood by one of ordinary skill in the art that the aspects may be practiced without these specific details.
For example, circuits may be shown in block diagrams in order to avoid obscuring the aspects in unnecessary detail. In other instances, well-known circuits, structures and techniques may not be shown in detail in order not to obscure the aspects of the disclosure.

[0035] The present disclosure describes a package on package (PoP) device that includes a first package, a second package, and a bi-directional thermal electric cooler (TEC). The first package includes a first substrate and a first die coupled to the first substrate. The second package is coupled to the first package. The second package includes a second substrate and a second die coupled to the second substrate. The bi-directional TEC is located between the first die and the second substrate. The bi-directional TEC is adapted to dynamically dissipate heat back and forth between the first package and the second package. The bi-directional TEC is adapted to dissipate heat from the first die to the second die in a first time period. The bi-directional TEC is further adapted to dissipate heat from the second die to the first die in a second time period. The bi-directional TEC is adapted to dissipate heat from the first die to the second die through the second substrate.

Exemplary Package on Package (PoP) Device Comprising Bi-Directional Thermal Electric Cooler

[0036] FIG. 2 illustrates an example of a package on package (PoP) device 200 that includes a first package 202 (e.g., first integrated device package), a second package 204 (e.g., second integrated device package), and a thermal electric cooler (TEC) 210.

[0037] The first package 202 includes a first substrate 220, a first die 222, and a first encapsulation layer 224. The first package 202 may also include the TEC 210. The TEC 210 is coupled to the first die 222. An adhesive 270 (e.g., thermally conductive adhesive) may be used to couple the TEC 210 to the first die 222. The adhesive 270 may couple a first surface (e.g., bottom surface) of the TEC 210 to a back side of the first die 222. The TEC 210 may be a bi-directional TEC capable of dissipating heat in a first direction (e.g., in a first time period / frame) and a second direction (e.g., in a second time period / frame), where the second direction is opposite to the first direction. More specifically, the TEC 210 may be a bi-directional TEC that may be configured and/or adapted to dynamically (e.g., in real time during operation of the PoP device 200) dissipate heat back and forth between the first package 202 and the second package 204. The TEC 210 may be a bi-directional heat transfer means. The TEC 210 may provide active heat dissipation (e.g., active heat transfer means). Various examples of TECs are further illustrated and described in detail below in at least FIGS. 6-9.

[0038] The first substrate 220 may be a package substrate. The first substrate 220 includes at least one dielectric layer 226, several interconnects 227, a first solder resist layer 228, and a second solder resist layer 229. The first solder resist layer 228 is on a first surface (e.g., bottom surface) of the first substrate 220. The second solder resist layer 229 is on a second surface (e.g., top surface) of the first substrate 220. The dielectric layer 226 may include a core layer and/or a prepreg layer. The interconnects 227 may include several traces, vias, and/or pads.
The interconnects 227 may be located in the dielectric layer 226 and/or on a surface of the dielectric layer 226.

[0039] An interconnect is an element or component of a device (e.g., integrated device, integrated device package, die) and/or a base (e.g., package substrate, printed circuit board, interposer) that allows or facilitates an electrical connection between two points, elements and/or components. In some implementations, an interconnect may include a trace, a via, a pad, a pillar, a redistribution metal layer, and/or an under bump metallization (UBM) layer. In some implementations, an interconnect is an electrically conductive material that is capable of providing an electrical path for a signal (e.g., data signal, ground signal, power signal). An interconnect may include more than one element / component. A set of interconnects may include one or more interconnects.

[0040] The first die 222 is coupled to (e.g., mounted on) the first substrate 220 through a set of solder 225 (e.g., solder balls). The first die 222 may be a logic die (e.g., central processing unit (CPU), graphical processing unit (GPU)). The first die 222 may be a flip chip. The first die 222 may be coupled to the first substrate 220 differently in different implementations. For example, the first die 222 may be coupled to the first substrate 220 through pillars and/or solder. Other forms of interconnects may be used to couple the first die 222 to the first substrate 220.

[0041] The first encapsulation layer 224 encapsulates at least part of the first die 222. The first encapsulation layer 224 may include a mold and/or an epoxy fill. The first encapsulation layer 224 may include several solder 230, 232, 234, and 236 (e.g., solder balls). The solder 230, 232, 234, and 236 may be coupled to the interconnects 227.

[0042] The first package 202 is coupled to (e.g., mounted on) a printed circuit board (PCB) 250 through a set of solder balls 252. The set of solder balls 252 is coupled to the interconnects 227. However, it is noted that the first package 202 may be coupled to the PCB 250 by using other means, such as a land grid array (LGA) and/or a pin grid array (PGA).

[0043] The second package 204 includes a second substrate 240, a second die 242, and a second encapsulation layer 244. The second substrate 240 may be a package substrate. The second substrate 240 includes at least one dielectric layer 246, several interconnects 247, a first solder resist layer 248, and a second solder resist layer 249. The first solder resist layer 248 is on a first surface (e.g., bottom surface) of the second substrate 240. The second solder resist layer 249 is on a second surface (e.g., top surface) of the second substrate 240. The dielectric layer 246 may include a core layer and/or a prepreg layer. The interconnects 247 may include several traces, vias, and/or pads. The interconnects 247 may be located in the dielectric layer 246 and/or on a surface of the dielectric layer 246.

[0044] The second die 242 is coupled to (e.g., mounted on) the second substrate 240 through a set of solder balls 245. The second die 242 may be a logic die or a memory die. The second die 242 may be a flip chip. The second die 242 may be coupled to the second substrate 240 differently in different implementations. For example, the second die 242 may be coupled to the second substrate 240 through pillars and/or solder.
Other forms of interconnects may be used to couple the second die 242 to the second substrate 240. The second encapsulation layer 244 encapsulates at least part of the second die 242. The second encapsulation layer 244 may include a mold and/or an epoxy fill.

[0045] The second package 204 is coupled (e.g., mounted) to the first package 202 such that the TEC 210 is between the first package 202 and the second package 204. As shown in FIG. 2, the TEC 210 is located between the first die 222 and the second substrate 240. An adhesive 272 (e.g., thermally conductive adhesive) may be used to couple the TEC 210 to the second substrate 240. The adhesive 272 may couple a second surface (e.g., top surface) of the TEC 210 to the first solder resist layer 248. In some implementations, the adhesive 272 may couple the second surface of the TEC 210 to the dielectric layer 246. The second package 204 may be coupled to the first package 202 so that at least part of the second die 242 is vertically aligned with the TEC 210 and/or the first die 222. The second package 204 may be electrically coupled to the first package 202 through the solder 230, 232, 234 and 236. The solder 230, 232, 234, and 236 may be coupled to the interconnects 247.

[0046] As mentioned above, the TEC 210 may be a bi-directional TEC capable of dissipating heat in a first direction (e.g., in a first time period / frame) and a second direction (e.g., in a second time period / frame), where the second direction is opposite to the first direction.

[0047] FIGS. 3-4 illustrate examples of how the TEC 210 may be adapted and/or configured to dissipate heat. FIG. 3 illustrates the TEC 210 adapted to dissipate heat from the first package 202 to the second package 204 during a first time period. At or during the first time period, the TEC 210 is adapted to dissipate heat from the first die 222 to the second package 204. The heat that is dissipated from the first die 222 may pass through the TEC 210, the second substrate 240 (which includes the dielectric layer 246 and the interconnects 247), the solder balls 245, the second die 242, and/or the second encapsulation layer 244. Thus, some of the heat from the first die 222 may heat the second die 242.

[0048] FIG. 4 illustrates the TEC 210 adapted to dissipate heat from the second package 204 to the first package 202 during a second time period. At or during the second time period, the TEC 210 is adapted to dissipate heat from the second die 242 to the first package 202. The heat that is dissipated from the second die 242 may pass through the solder balls 245, the second substrate 240 (which includes the dielectric layer 246 and the interconnects 247), the TEC 210 and/or the first die 222. Thus, some of the heat from the second die 242 may heat the first die 222.

[0049] In some implementations, the TEC 210 may be adapted to dissipate heat back and forth between the first package 202 and the second package 204 (e.g., back and forth between the first die 222 and the second die 242) to provide optimal die performance while still operating within the thermal limits of the dies.
For example, if the first die 222 has reached its thermal operating limit (e.g., temperature operating limit), the TEC 210 may be adapted and/or configured to dissipate heat away from the first die 222 and towards the second die 242 (as long as the second die has not reached its thermal operating limit). Similarly, if the first die 222 is still within its thermal operating limit, but the second die 242 has reached its thermal operating limit, the TEC 210 may be adapted and/or configured to dissipate heat away from the second die 242 and towards the first die 222. Thus, the TEC 210 may be a bi-directional TEC that may be configured and/or adapted to dynamically (e.g., in real time during operation of the PoP device 200) dissipate heat back and forth between the first package 202 and the second package 204. Various examples of TECs in a device (e.g., PoP device) and how the TECs may be configured, adapted, and/or controlled for thermal management are further illustrated and described in detail below in at least FIGS. 6-12 and 16-18. [0050] In some implementations, a TEC (e.g., bi-directional TEC) may be located between two dies. An example of such a configuration is illustrated and described below in FIG. 21. Exemplary Package on Package (PoP) Device Comprising Bi-Directional Thermal Electric Cooler [0051] FIG. 5 illustrates an example of another package on package (PoP) device 500 that includes a first package 502 (e.g., first integrated device package), the second package 204 (e.g., second integrated device package), and the thermal electric cooler (TEC) 210. In some implementations, the PoP device 500 of FIG. 5 is similar to the PoP device 200, except that different types of interconnects are used to electrically couple the second package 204 to the first package 502. [0052] The first package 502 includes the first substrate 220, the first die 222, and the first encapsulation layer 224. The first package 502 may also include the TEC 210. The TEC 210 is coupled to the first die 222. The adhesive 270 (e.g., thermally conductive adhesive) may be used to couple the TEC 210 to the first die 222. The adhesive 270 may couple a first surface (e.g., bottom surface) of the TEC 210 to the back side of the first die 222. The TEC 210 may be a bi-directional TEC capable of dissipating heat in a first direction (e.g., in a first time period / frame) and a second direction (e.g., in a second time period / frame), where the second direction is opposite to the first direction. In some implementations, the TEC 210 may be a bi-directional TEC that may be configured and/or adapted to dynamically (e.g., in real time during operation of the PoP device 500) dissipate heat back and forth between the first package 502 and the second package 204, as described above for FIGS. 3-4. [0053] The first encapsulation layer 224 encapsulates at least part of the first die 222. The first encapsulation layer 224 may include a mold and/or an epoxy fill. The first encapsulation layer 224 may include several vias 510. The vias 510 may be through encapsulation vias (TEVs) or through mold vias (TMVs). The vias 510 are coupled to the interconnects 227. Several interconnects 512 are formed in the first encapsulation layer 224. The interconnects 512 may be redistribution interconnects. The interconnects 512 are coupled to the vias 510.
A solder 520 (e.g., solder ball) is coupled to the interconnects 512 and the second substrate 240. The solder 520 is coupled to the interconnects 247 of the second substrate 240. Exemplary Thermal Electric Cooler (TEC) [0054] FIG. 6 illustrates a profile view of an example of a thermal electric cooler (TEC) 600. The TEC 600 may be implemented in any of the packages and/or package on package (PoP) devices described in the present disclosure. For example, the TEC 600 may be the TEC 210 described above. [0055] The TEC 600 may be a bi-directional TEC. The TEC 600 may be a bi-directional heat transfer means. The TEC 600 includes an N-doped component 602 (e.g., N-doped semiconductor), a P-doped component 604 (e.g., P-doped semiconductor), a carrier 606, an interconnect 612, and an interconnect 614. The carrier 606 may be optional. The TEC 600 may include several N-doped components 602 and several P-doped components 604. The TEC 600 may include several interconnects 612 and several interconnects 614. The interconnects 612 are located on a first side (e.g., bottom side) of the TEC 600. The interconnects 614 are located on a second side (e.g., top side) of the TEC 600. [0056] The N-doped component 602 is coupled to the P-doped component 604 through an interconnect. For example, the interconnect 614 is coupled to the N-doped component 602. The N-doped component 602 is coupled to the interconnect 612. The interconnect 612 is coupled to the P-doped component 604. The P-doped component 604 is coupled to another interconnect 614. The above pattern may be repeated several times to form the TEC 600. [0057] In some implementations, the TEC 600 may be configured and/or adapted to dissipate heat in a first direction and a second direction by providing a current through the TEC 600. Different polarities of the current that run through the TEC 600 may configure and/or adapt the TEC 600 differently. For example, a first current (e.g., a first current with a first polarity) that flows through the interconnect 614, the N-doped component 602, the interconnect 612, and the P-doped component 604, in that order, may configure the TEC 600 so that heat dissipates from the bottom side of the TEC 600 to the top side of the TEC 600. In such an instance, the bottom side of the TEC 600 is the cool side, and the top side of the TEC 600 is the hot side. [0058] When a second current (e.g., the first current with a second polarity) flows through the P-doped component 604, the interconnect 612, the N-doped component 602, and the interconnect 614, in that order, the TEC 600 may be configured so that heat dissipates from the top side of the TEC 600 to the bottom side of the TEC 600. In such an instance, the top side of the TEC 600 is the cool side, and the bottom side of the TEC 600 is the hot side. [0059] Thus, by changing the flow or polarity of the current (e.g., positive current, negative current) through the TEC 600, the TEC 600 may be configured as a bi-directional TEC that can be adapted to dissipate heat back and forth between the top side and the bottom side of the TEC 600. [0060] FIG. 7 illustrates an angled view of a conceptual TEC 600. The TEC 600 includes a first pad 702 (e.g., first terminal), a second pad 704 (e.g., second terminal), a dielectric layer 712, and a dielectric layer 714. The first pad 702 may be coupled to an interconnect (e.g., interconnect 614) or an N-doped component (e.g., N-doped component 602).
The second pad 704 may be coupled to an interconnect or a P-doped component (e.g., P-doped component 604). The dielectric layers 712 and 714 surround the respective pads 702 and 704 to ensure that there is no shorting when the pads 702 and 704 are coupled to interconnects (e.g., solder) of a package. [0061] The first pad 702 and the second pad 704 may be located on different portions of the TEC 600. FIG. 7 illustrates that the first pad 702 and the second pad 704 are on a first side (e.g., top side) of the TEC 600. However, in some implementations, the first pad 702 and/or the second pad 704 may be located on a second side (e.g., bottom side) of the TEC 600. The TEC 600 may be coupled to packages (e.g., die of a package, substrate of a package) by using one or more adhesives (e.g., thermally conductive adhesives). For example, a first adhesive may be coupled on a first side or a first surface of the TEC 600, and a second adhesive may be coupled on a second side or second surface of the TEC 600. [0062] In some implementations, a TEC may include several TECs. That is, a TEC may be an array of TECs that can be individually adapted and/or configured to dissipate heat in a particular direction. [0063] FIG. 8 illustrates an angled view of a conceptual TEC 800 that includes several TECs. The TEC 800 is an array of TECs. As shown in FIG. 8, the TEC 800 includes a carrier 801, a first TEC 802, a second TEC 804, a third TEC 806, a fourth TEC 808, a fifth TEC 810, and a sixth TEC 812. The carrier 801 may be used to provide structural support for the individual TECs. The individual TECs (e.g., TEC 802) may be similar to the TEC 600. The TEC 800 may be implemented in any of the packages and/or PoP devices described in the present disclosure. [0064] The TEC 800 may be used to provide heat dissipation for one or more dies, and/or to provide localized heat dissipation for a die. For example, a die may include hot spots and/or cool spots, and the TEC 800 may be used to dissipate heat away from only specific hot spot regions on the die. [0065] FIG. 9 illustrates an example of how an array of TECs may be configured and/or adapted to dissipate heat. As shown in FIG. 9, the TEC 800 is configured so that some TECs dissipate heat in one direction, while other TECs dissipate heat in another direction. In addition, some TECs may be inactive. When a TEC is inactive, the TEC may still passively conduct heat (e.g., passive heat conduction) from a hotter side to a cooler side. In the example of FIG. 9, the TEC 802 and the TEC 812 are configured and/or adapted to dissipate heat from a top side to a bottom side of the TEC 800. The TEC 806 and the TEC 808 are configured and/or adapted to dissipate heat from a bottom side to a top side of the TEC 800. The TEC 804 and the TEC 810 are inactive (off). The TEC 800 may be dynamically configured and/or adapted differently based on the temperatures (e.g., localized temperatures) of the die(s), as the die(s) are in operation. The TEC 800 may be coupled to one die or several dies. A sketch of this per-element control appears below. Exemplary Configurations of Device Comprising Thermal Electric Cooler(s) [0066] A thermal electric cooler (TEC) may be adapted and/or configured by one or more controllers in a device. FIG. 10 illustrates an example of a configuration of how one or more thermal electric coolers (TECs) 1000 may be controlled, configured and/or adapted to dissipate heat.
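For illustration only, the per-element control of FIG. 9 might be sketched as follows. This is a minimal sketch, not part of the disclosed embodiments; the function and type names, the thresholds, and the temperature values are all assumptions made for the example.

```c
/* Illustrative sketch only: per-element configuration of an array of
 * TECs (cf. TEC 800 of FIGS. 8-9) from hypothetical localized die
 * temperatures. Names, thresholds, and readings are assumptions. */
#include <stdio.h>

typedef enum { TEC_OFF, TEC_PUMP_UP, TEC_PUMP_DOWN } tec_dir_t;

#define NUM_TECS 6

/* A region at or above hot_c is treated as a hot spot and its TEC pumps
 * heat down and away from the die; a region at or below cool_c can
 * absorb heat and its TEC pumps heat up toward the die; anything in
 * between leaves the TEC off (passive conduction only). */
void configure_tec_array(const double region_temp_c[NUM_TECS],
                         double hot_c, double cool_c,
                         tec_dir_t dir[NUM_TECS])
{
    for (int i = 0; i < NUM_TECS; i++) {
        if (region_temp_c[i] >= hot_c)
            dir[i] = TEC_PUMP_DOWN;
        else if (region_temp_c[i] <= cool_c)
            dir[i] = TEC_PUMP_UP;
        else
            dir[i] = TEC_OFF;
    }
}

int main(void)
{
    /* Hypothetical localized readings for TECs 802, 804, ..., 812. */
    const double temp_c[NUM_TECS] = { 92.0, 70.0, 55.0, 58.0, 71.0, 95.0 };
    tec_dir_t dir[NUM_TECS];
    configure_tec_array(temp_c, 85.0, 60.0, dir);
    for (int i = 0; i < NUM_TECS; i++)
        printf("TEC element %d direction: %d\n", i, (int)dir[i]);
    return 0;
}
```

With these hypothetical readings the result reproduces the pattern of FIG. 9: the first and sixth elements pump heat in one direction, the third and fourth in the other, and the second and fifth remain inactive.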
The configuration includes the TECs 1000, a TEC controller 1002, a thermal controller 1004, and several temperature sensors 1006. The TECs 1000 may be a bi-directional heat transfer means. [0067] The temperature sensors 1006 may include at least one temperature sensor for a first die (e.g., logic die), and at least one temperature sensor for a second die (e.g., memory die). The temperature sensors 1006 may include other sensors for other dies. The temperature sensors 1006 may be separate from their respective dies, or they may be integrated into their respective dies. The temperature sensors 1006 are in communication with the thermal controller 1004. The temperature sensors 1006 may transmit temperature readings to the thermal controller 1004. Thus, the thermal controller 1004 may receive temperature readings from the temperature sensors 1006. [0068] The thermal controller 1004 may be a separate device, unit, and/or die. The thermal controller 1004 may be configured to control and regulate operations of a TEC and/or dies so that the dies operate within their operational temperature limits. For example, the thermal controller 1004 may control how and when a TEC is active (on) or inactive (off). The thermal controller 1004 may also control the performance of a die by putting performance limitations on the die. For example, the thermal controller 1004 may limit the clock speed of a die in order to ensure that the die does not reach or exceed its maximum operating temperature. The thermal controller 1004 may control, configure, and/or adapt the TECs 1000 through the TEC controller 1002. However, the thermal controller 1004 may control, configure and/or adapt the TECs 1000 directly in some implementations. In some implementations, the TEC controller 1002 is part of the thermal controller 1004. The thermal controller 1004 may transmit signals and/or instructions to the TEC controller 1002 so that the TEC controller 1002 can control, adapt and/or configure the TECs 1000. [0069] The TEC controller 1002 may control, adapt and/or configure one or more TECs 1000 by transmitting one or more currents (e.g., first current, second current) to one or more TECs 1000. The property of the current (e.g., polarity of the current) that is transmitted to the TEC may configure how the TEC dissipates heat. For example, a first current having a first polarity (e.g., positive current) that is transmitted to a TEC may configure the TEC to dissipate heat in a first direction (e.g., bottom to top). A second current having a second polarity (e.g., negative current) that is transmitted to a TEC may configure the TEC to dissipate heat in a second direction (e.g., top to bottom), that is opposite to the first direction. Moreover, different amperages of current may be transmitted to the different TECs 1000. For example, a first TEC may receive a first current having a first amperage, while a second TEC may receive a second current having a second amperage. [0070] FIG. 10 further illustrates some of the variables that the thermal controller 1004 may take into account to control, adapt and/or configure one or more TECs 1000. As shown in FIG. 10, the thermal controller 1004 may receive an input of a temperature of a first die (e.g., logic die) and compare it to the limit temperature (e.g., upper limit temperature) of the first die.
The thermal controller 1004 may further weight the difference (if any) between the temperature of the first die and the limit temperature of the first die to control, adapt and/or configure one or more TECs 1000 associated with (e.g., coupled to) the first die. [0071] FIG. 10 also illustrates that the thermal controller 1004 may receive an input of a temperature of a second die (e.g., memory die) and compare it to the limit temperature (e.g., upper limit temperature) of the second die. The thermal controller 1004 may further weight the difference (if any) between the temperature of the second die and the limit temperature of the second die to control, adapt and/or configure one or more TECs 1000 associated with (e.g., coupled to) the second die. [0072] In addition to temperature and/or temperature limits, other variables include the rate at which heat is being generated by the dies, the rate at which the temperature is increasing/decreasing in the dies, the source of the power to the packages (e.g., battery, plug-in source) and/or how much the dies are being utilized (e.g., percentage utilization of dies, clock speed). These variables may be weighted differently. [0073] The thermal controller 1004 may take into account the above various variables separately, independently, concurrently, and/or jointly. An example of how a thermal controller 1004 may take into account the various temperatures of the dies is illustrated and described in FIGS. 16-18. [0074] Different implementations may provide different configurations of a device that includes at least one TEC. FIG. 11 illustrates an example of another configuration of how one or more thermal electric coolers (TECs) 1000 may be controlled, configured and/or adapted to dissipate heat. The configuration of FIG. 11 includes the TEC 1000, a first die 1101, a TEC controller 1102, a thermal controller 1104, at least one first temperature sensor 1106, and at least one second temperature sensor 1108. [0075] The first die 1101 includes the thermal controller 1104 and the first temperature sensor 1106. The second temperature sensor 1108 may transmit temperature readings (e.g., temperature readings of a second die) to the first die 1101. More specifically, the second temperature sensor 1108 may transmit temperature readings to the thermal controller 1104. Similarly, the first temperature sensor 1106 may transmit temperature readings (e.g., temperature readings of the first die 1101) to the thermal controller 1104. Thus, the thermal controller 1104 may receive temperature readings from the first temperature sensor 1106 and the second temperature sensor 1108. The thermal controller 1104 may be configured to control and regulate operations of a TEC and/or dies so that the dies operate within their operational temperature limits, in a similar manner as described for the thermal controller 1004. [0076] The first die 1101 and the thermal controller 1104 may transmit signals and/or instructions to the TEC controller 1102 so that the TEC controller 1102 can control, adapt and/or configure the TECs 1000. The TEC controller 1102 may control, adapt and/or configure the TECs 1000 by transmitting currents, in a similar manner as described for the TEC controller 1002. [0077] FIG. 11 also illustrates some of the variables that the first die 1101 and/or the thermal controller 1104 may take into account to control, adapt and/or configure one or more TECs 1000. The variables in FIG.
11 are similar to the variables described in FIG. 10, except that the variables may be taken into account by the first die 1101 and/or the thermal controller 1104. [0078] FIG. 12 illustrates an example of another configuration of how one or more thermal electric coolers (TECs) 1000 may be controlled, configured and/or adapted to dissipate heat. The configuration of FIG. 12 includes the TECs 1000, a first die 1201, a TEC controller 1202, the thermal controller 1104, at least one first temperature sensor 1106, and at least one second temperature sensor 1108. FIG. 12 is similar to FIG. 11, except that the TEC controller 1202 is implemented in the first die 1201. Thus, the configuration of FIG. 12 operates in a similar manner as the configuration of FIG. 11, except that the TEC controller 1202 operates within the first die 1201. [0079] FIG. 12 also illustrates some of the variables that the first die 1201 and/or the thermal controller 1104 may take into account to control, adapt and/or configure one or more TECs 1000. The variables in FIG. 12 are similar to the variables described in FIG. 10, except that the variables may be taken into account by the first die 1201 and/or the thermal controller 1104. [0080] It is noted that different implementations may provide different configurations and/or designs of the above TECs, TEC controller, thermal controller, and temperature sensors. Exemplary Connections of Thermal Electric Cooler (TEC) in a Package on Package (PoP) Device [0081] FIGS. 13-15 illustrate various examples of how a thermal electric cooler (TEC) in a package on package (PoP) device may be electrically coupled to various components or devices. [0082] FIG. 13 illustrates the PoP device 200 of FIG. 2. As shown in FIG. 13, the first die 222 is electrically coupled to the printed circuit board (PCB) 250 through a first set of interconnects 1302. The first set of interconnects 1302 may include a solder (from solder 225), interconnects (e.g., traces, vias, pads) from the interconnects 227, and a solder ball (from solder balls 252). The first set of interconnects 1302 may provide an electrical path between the first die 222 and a power source (not shown), a thermal controller (not shown), or a thermal electric cooler (TEC) controller (not shown). In some implementations, the thermal controller and/or the TEC controller may be implemented in the first die 222. [0083] FIG. 13 also illustrates the thermal electric cooler (TEC) 210 electrically coupled to the PCB 250 through a second set of interconnects 1304. The second set of interconnects 1304 may be coupled to pads (e.g., pads 702, 704) and/or terminals on the TEC 210 as described in FIG. 7. The second set of interconnects 1304 may include a through substrate via (TSV) that traverses the first die 222, redistribution layers, a solder (from solder 225), interconnects (e.g., traces, vias, pads) from the interconnects 227, and a solder ball (from solder balls 252). The second set of interconnects 1304 may provide an electrical path between the TEC 210 and a TEC controller (not shown). [0084] FIG. 14 illustrates how the TEC 210 may be electrically coupled to different components and/or devices in the PoP device 200. As shown in FIG. 14, the first die 222 is electrically coupled to the printed circuit board (PCB) 250 through a first set of interconnects 1402.
The first set of interconnects 1402 may include a solder (from solder 225), interconnects (e.g., traces, vias, pads) from the interconnects 227, and a solder ball (from solder balls 252). The first set of interconnects 1402 may provide an electrical path between the first die 222 and a power source (not shown), a thermal controller (not shown), or a thermal electric cooler (TEC) controller (not shown). In some implementations, the thermal controller and/or the TEC controller may be implemented in the first die 222. [0085] FIG. 14 also illustrates the thermal electric cooler (TEC) 210 electrically coupled to the PCB 250 through a second set of interconnects 1404. The second set of interconnects 1404 may be coupled to pads (e.g., pads 702, 704) and/or terminals on the TEC 210 as described in FIG. 7. The second set of interconnects 1404 may include interconnects from the interconnects 247, solder 234, interconnects (e.g., traces, vias, pads) from the interconnects 227, and a solder ball (from solder balls 252). The second set of interconnects 1404 may provide an electrical path between the TEC 210 and a TEC controller (not shown). In this example, the second set of interconnects 1404 traverses both the second package 204 and the first package 202. [0086] FIG. 15 illustrates the PoP device 500 of FIG. 5. As shown in FIG. 15, the first die 222 is electrically coupled to the printed circuit board (PCB) 250 through a first set of interconnects 1502. The first set of interconnects 1502 may include a solder (from solder 225), interconnects (e.g., traces, vias, pads) from the interconnects 227, and a solder ball (from solder balls 252). The first set of interconnects 1502 may provide an electrical path between the first die 222 and a power source (not shown), a thermal controller (not shown), or a thermal electric cooler (TEC) controller (not shown). In some implementations, the thermal controller and/or the TEC controller may be implemented in the first die 222. [0087] FIG. 15 also illustrates the thermal electric cooler (TEC) 210 electrically coupled to the PCB 250 through a second set of interconnects 1504. The second set of interconnects 1504 may be coupled to pads (e.g., pads 702, 704) and/or terminals on the TEC 210 as described in FIG. 7. The second set of interconnects 1504 may include interconnects from the interconnects 512 (e.g., redistribution interconnects), a via (e.g., through mold via (TMV), through encapsulation via (TEV)) from the vias 510, interconnects (e.g., traces, vias, pads) from the interconnects 227, and a solder ball (from solder balls 252). The second set of interconnects 1504 may provide an electrical path between the TEC 210 and a TEC controller (not shown). Exemplary Illustration of How The Operation of a Thermal Electric Cooler (TEC) May Affect The Temperatures of Dies [0088] FIG. 16 illustrates three graphs of how the operation of a thermal electric cooler (TEC) may affect the temperatures of various dies. FIG. 16 illustrates a first graph 1602, a second graph 1604, and a third graph 1606. The first graph 1602 is a temperature reading of a first die (e.g., during operation of the first die 222) over a time period. The second graph 1604 is a temperature reading of a second die (e.g., during operation of the second die 242) over a time period. The third graph 1606 is a reading of the current that is transmitted to / received by the thermal electric cooler (TEC) (e.g., TEC 210) over a time period.
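For illustration only, the kind of drive-current decision reflected in graph 1606 might be sketched as follows. This is a minimal sketch, not part of the disclosed embodiments; the function name, the proportional gain, the current limit, and the scaling of amperage with temperature excess are assumptions made for the example, while the sign convention follows the description of FIG. 16 below (positive polarity pumps heat away from the second die, negative away from the first die).

```c
/* Illustrative sketch only of a drive-current rule that would produce a
 * waveform like graph 1606. Sign encodes polarity: positive pumps heat
 * away from the second die, negative away from the first die, and zero
 * leaves the TEC inactive. Gain and limit values are hypothetical. */
#include <math.h>

double tec_drive_current(double t_die1, double t_die1_max,
                         double t_die2, double t_die2_max,
                         double gain_a_per_deg, double i_max_a)
{
    int die1_hot = (t_die1 >= t_die1_max);
    int die2_hot = (t_die2 >= t_die2_max);

    if (die2_hot && !die1_hot)   /* first polarity: cool the second die */
        return +fmin(gain_a_per_deg * (t_die2 - t_die2_max), i_max_a);
    if (die1_hot && !die2_hot)   /* second polarity: cool the first die */
        return -fmin(gain_a_per_deg * (t_die1 - t_die1_max), i_max_a);
    return 0.0;                  /* otherwise transmit no current */
}
```

Repeatedly applying such a rule as the die temperatures cross their limits yields the alternating current polarity of graph 1606 and the bounded temperature traces of graphs 1602 and 1604, as the following time periods illustrate.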
[0089] During the time period A, both the first die and the second die are operational. As time passes, the temperatures of the first die and the second die increase. Since both the first die and the second die have operating temperatures that are respectively less than their maximum temperatures (e.g., maximum operating temperatures, first maximum temperature, second maximum temperature), the TEC does not have to be operational / active. Thus, no current is transmitted to the TEC or received by the TEC. [0090] At the end of the time period A, the second die has reached its maximum operating temperature (e.g., TDIE2). However, the first die has not reached its maximum operating temperature (e.g., TDIE1) at the end of the time period A. Thus, heat can be dissipated away from the second die towards the first die. A current (e.g., first current having a first polarity) is transmitted to and received by the TEC, which causes the TEC to dissipate heat away from the second die. The first polarity may be a positive polarity. [0091] During the time period B, after the TEC is activated and while the TEC is active, the temperature of the second die begins to decrease, while the temperature of the first die increases at a faster rate (due to the heat from the second die being transferred to the first die). Since the first die is operational, the first die is generating its own heat, while at the same time, the first die is receiving heat from the second die. [0092] At the end of the time period B, the first die has reached its maximum operating temperature, while the second die is now below its maximum operating temperature. In this instance, heat can be dissipated away from the first die and towards the second die. A current with a different polarity (e.g., opposite polarity, second polarity) is transmitted to and received by the TEC. The second polarity may be a negative polarity. The new polarity of the current causes the TEC to dissipate heat away from the first die and towards the second die. [0093] During the time period C, while the TEC is active with a current with a new polarity, the temperature of the first die begins to decrease, while the temperature of the second die begins to increase (due to heat generated from the second die and the heat that is transferred from the first die). [0094] At the end of the time period C, the second die has reached its maximum operating temperature, while the first die is now below its maximum operating temperature. The current that is transmitted to and received by the TEC has now been changed back to another polarity (e.g., first polarity, positive polarity), which causes the TEC to again dissipate heat away from the second die. [0095] During the time period D, the temperature of the second die begins to decrease, while the temperature of the first die increases. [0096] Thus, by changing the current that is transmitted to and received by the TEC, the temperatures of the dies may be dynamically controlled without having to throttle the performance of the dies. However, in some implementations, the thermal management and/or control of the dies may be achieved through a combination of limiting the performance of the dies (e.g., throttling one or more dies) and the use of at least one TEC.
It is noted that different implementations may use different currents with different values and polarities to activate, configure and adapt the TEC to dissipate heat. [0097] Having described an example of how thermal management of dies may be achieved by using at least one TEC, several methods for thermal management of dies that include at least one TEC will now be described in the next sections. In some implementations, the thermal management of the dies may include limiting the performance of one or more dies. Exemplary Flow Diagram of Method for Thermal Management of Dies By Using a Thermal Electric Cooler [0098] FIG. 17 illustrates an exemplary flow diagram of a method 1700 for thermal management of two or more dies by using at least one thermal electric cooler (TEC). The method 1700 may be performed by a TEC controller and/or a thermal controller. [0099] The TEC may be active (e.g., on) or inactive (off) before the method 1700. The method receives (at 1705) temperature(s) (e.g., first temperature reading, second temperature reading) of a first die and temperature(s) of a second die. The first die may be the first die 222. The second die may be the second die 242. The temperatures may be temperature readings from at least one first temperature sensor for the first die, and temperature readings from at least one second temperature sensor for the second die. [00100] The method determines (at 1710) whether the temperature of the first die is equal to or greater than a maximum threshold operating temperature of the first die. For example, if the maximum threshold operating temperature of the first die is 100 °F, the method determines whether the temperature of the first die is equal to or greater than 100 °F. In instances where there are multiple temperatures (e.g., localized temperatures) for the first die, the method may make several determinations. [00101] When the method determines (at 1710) that the temperature of the first die is not equal to or greater than the maximum threshold operating temperature of the first die, the method proceeds to determine (at 1715) whether the temperature of the second die is equal to or greater than a maximum threshold operating temperature of the second die. For example, if the maximum threshold operating temperature of the second die is 85 °F, the method determines whether the temperature of the second die is equal to or greater than 85 °F. In instances where there are multiple temperatures (e.g., localized temperatures) for the second die, the method may make several determinations. [00102] When the method determines (at 1715) that the temperature of the second die is not equal to or greater than the maximum threshold operating temperature of the second die, the method proceeds to instruct (at 1720) the TEC to be inactive (e.g., off). In some implementations, instructing the TEC to be inactive includes not transmitting a current to the TEC. If the TEC is already inactive, then there is no current being transmitted to the TEC. The method then proceeds to determine (at 1725) whether to continue with the thermal management of the dies. [00103] However, referring back to 1715, when the method determines (at 1715) that the temperature of the second die is equal to or greater than the maximum threshold operating temperature of the second die, the method proceeds to configure (at 1730) and/or adapt the TEC to dissipate heat away from the second die.
In such instances, the method may configure and/or adapt the TEC to dissipate heat in a first direction (e.g., direction away from the second die), towards the first die. This may include sending a first current having a first polarity (e.g., positive polarity) to the TEC. The method then proceeds to determine (at 1725) whether to continue with the thermal management of the dies. [00104] Referring back to 1710, when the method determines (at 1710) that the temperature of the first die is equal to or greater than the maximum threshold operating temperature of the first die, the method proceeds to determine (at 1735) whether the temperature of the second die is equal to or greater than a maximum threshold operating temperature of the second die. In instances where there are multiple temperatures (e.g., localized temperatures) for the second die, the method may make several determinations. [00105] When the method determines (at 1735) that the temperature of the second die is equal to or greater than the maximum threshold operating temperature of the second die, the method proceeds to configure (at 1740) the TEC to be inactive (e.g., off). In this instance, where both the first die and the second die have temperatures that are greater than their respective maximum threshold temperatures, using the TEC would not be productive. In such instances, throttling the performance of one or more of the dies (e.g., limiting the clock speed of the dies) may be used to reduce the temperatures of the dies. In some implementations, instructing the TEC to be inactive includes not transmitting a current to the TEC. If the TEC is already inactive, then there is no current being transmitted to the TEC. The method then proceeds to determine (at 1725) whether to continue with the thermal management of the dies. [00106] However, referring back to 1735, when the method determines (at 1735) that the temperature of the second die is not equal to or greater than the maximum threshold operating temperature of the second die, the method proceeds to configure (at 1745) and/or adapt the TEC to dissipate heat away from the first die. In such instances, the method may configure and/or adapt the TEC to dissipate heat in a second direction (e.g., direction away from the first die), towards the second die. This may include sending a second current having a second polarity (e.g., negative polarity) to the TEC. The method then proceeds to determine (at 1725) whether to continue with the thermal management of the dies. [00107] The method determines (at 1725) whether to continue with the thermal management of the dies. If so, the method proceeds back to receive (at 1705) temperature(s) of the first die and temperature(s) of the second die. [00108] However, when the method determines (at 1725) not to continue with the thermal management of the dies, the method proceeds to configure (at 1745) the TEC to be inactive (e.g., off). This may be achieved by discontinuing the transmission of any current to the TEC. Exemplary Flow Diagram of Method for Thermal Management of Dies By Using a Thermal Electric Cooler and/or Performance Limitations on the Dies [00109] FIG. 18 illustrates an exemplary flow diagram of another method 1800 for thermal management of two or more dies by using at least one thermal electric cooler (TEC) and/or performance limitations on the dies. The method 1800 may be performed by a TEC controller and/or a thermal controller.
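For illustration only, the branch structure of the method 1700 of FIG. 17 might be sketched as follows; the method 1800 described next layers performance limiting on top of logic of this kind. This is a minimal sketch, not part of the disclosed embodiments, and the type and function names are assumptions made for the example.

```c
/* Illustrative sketch only of the branch structure of the method 1700
 * (reference numerals in the comments refer to FIG. 17). */
typedef enum {
    TEC_INACTIVE,         /* no current transmitted (1720 / 1740)   */
    TEC_COOL_FIRST_DIE,   /* second current, second polarity (1745) */
    TEC_COOL_SECOND_DIE   /* first current, first polarity (1730)   */
} tec_action_t;

/* One pass of the loop: called with the readings received at 1705 and
 * repeated for as long as 1725 decides to continue thermal management. */
tec_action_t thermal_step(double t_die1, double t_die1_max,
                          double t_die2, double t_die2_max)
{
    if (t_die1 >= t_die1_max) {                 /* determination at 1710 */
        if (t_die2 >= t_die2_max)               /* determination at 1735 */
            return TEC_INACTIVE;    /* 1740: both dies hot, the TEC would
                                       be unproductive; throttle instead */
        return TEC_COOL_FIRST_DIE;  /* 1745: pump heat toward second die */
    }
    if (t_die2 >= t_die2_max)                   /* determination at 1715 */
        return TEC_COOL_SECOND_DIE; /* 1730: pump heat toward first die  */
    return TEC_INACTIVE;            /* 1720: both dies within limits     */
}
```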
[00110] The TEC may be active (e.g., on) or inactive (off) before the method 1800. The method receives (at 1805) temperature(s) (e.g., first temperature reading, second temperature reading) of a first die and temperature(s) of a second die. The first die may be the first die 222. The second die may be the second die 242. The temperatures may be temperature readings from at least one first temperature sensor for the first die, and temperature readings from at least one second temperature sensor for the second die. [00111] The method determines (at 1810) whether the temperature of the first die is equal to or greater than a maximum threshold operating temperature of the first die, and the temperature of the second die is equal to or greater than a maximum threshold operating temperature of the second die. In instances where there are multiple temperatures (e.g., localized temperatures) for the first die and/or the second die, the method may make several determinations. [00112] When the method determines (at 1810) that both the temperature of the first die is equal to or greater than the maximum threshold operating temperature of the first die, and the temperature of the second die is equal to or greater than the maximum threshold operating temperature of the second die, the method limits (at 1815) the performance of the first die and/or the second die. In some implementations, limiting the performance of the dies may include throttling the dies, such as limiting the maximum clock speeds of one or more dies. Different implementations may limit the performance of the dies differently. For example, the performance of the first die may be limited more than the performance of the second die. [00113] The method then proceeds to receive (at 1805) temperature(s) of a first die and temperature(s) of a second die. [00114] However, when the method determines (at 1810) that the temperature of the first die is not equal to or greater than the maximum threshold operating temperature of the first die, and the temperature of the second die is not equal to or greater than the maximum threshold operating temperature of the second die, then the method may optionally remove or reduce (at 1820) any limitations on the performances of the first die and/or the second die. [00115] The method determines (at 1825) whether the temperature of the first die is equal to or greater than the maximum threshold operating temperature of the first die, or the temperature of the second die is equal to or greater than the maximum threshold operating temperature of the second die. In instances where there are multiple temperatures (e.g., localized temperatures) for the first die and/or the second die, the method may make several determinations. [00116] When the method determines (at 1825) that the temperature of the first die is equal to or greater than the maximum threshold operating temperature of the first die, or the temperature of the second die is equal to or greater than the maximum threshold operating temperature of the second die, the method activates (at 1830) a thermal electric cooler (TEC). This may include sending a current to the TEC. The TEC may be activated to either dissipate heat away from the first die or away from the second die.
For example, when the temperature of the first die is equal to or greater than the maximum threshold operating temperature of the first die, but the temperature of the second die is not equal to or greater than the maximum threshold operating temperature of the second die, the TEC may be activated to dissipate heat away from the first die. When the temperature of the first die is not equal to or greater than the maximum threshold operating temperature of the first die, but the temperature of the second die is equal to or greater than the maximum threshold operating temperature of the second die, the TEC may be activated to dissipate heat away from the second die. An example of how a TEC may be activated is illustrated and described in FIG. 17. The method then proceeds to receive (at 1805) temperature(s) of the first die and temperature(s) of the second die. [00117] When the method determines (at 1825) that the temperature of the first die is not equal to or greater than the maximum threshold operating temperature of the first die, and the temperature of the second die is not equal to or greater than the maximum threshold operating temperature of the second die, the method deactivates (at 1835) the thermal electric cooler (TEC). Deactivating the TEC may include not transmitting a current to the TEC. When the TEC is already inactive, no current is transmitted either. It is noted that in some implementations, the same current or different currents (e.g., currents with different amperages) may be transmitted. In some implementations, a stronger current (e.g., a current with a greater amperage) will provide greater active heat dissipation than a weaker current (e.g., a current with a lower amperage). Different implementations may use different factors and/or variables when determining the strength of the current. Such factors and/or variables may include the source of the power of the package (e.g., battery power, plug-in power) and/or the rate of temperature change of the dies. [00118] The method 1800 may be iterated several times until thermal management of the dies ends. Exemplary Sequence for Providing / Fabricating a Package on Package (PoP) Device Comprising Bi-Directional Thermal Electric Cooler (TEC) [00119] In some implementations, providing / fabricating a package on package (PoP) device that includes at least one bi-directional thermal electric cooler (TEC) includes several processes. FIG. 19 (which includes FIGS. 19A-19B) illustrates an exemplary sequence for providing / fabricating a PoP device that includes at least one bi-directional thermal electric cooler (TEC). In some implementations, the sequence of FIGS. 19A-19B may be used to provide / fabricate the PoP device of FIGS. 2-5 and/or other PoP devices described in the present disclosure. [00120] It should be noted that the sequence of FIGS. 19A-19B may combine one or more stages in order to simplify and/or clarify the sequence for providing / fabricating a PoP device that includes a bi-directional thermal electric cooler (TEC). In some implementations, the order of the processes may be changed or modified. [00121] Stage 1, as shown in FIG. 19A, illustrates a state after a substrate 1900 is provided. The substrate 1900 may be a package substrate. The substrate 1900 may be fabricated or supplied by a supplier or manufacturer.
The substrate 1900 includes at least one dielectric layer 1902, a set of interconnects 1904 (e.g., traces, vias, pads), a first solder resist layer 1906 and a second solder resist layer 1908. The dielectric layer 1902 may include a core layer and/or a prepreg layer. [00122] Stage 2 illustrates a state after a first die 1910 is coupled (e.g., mounted) to the substrate 1900. The first die 1910 is coupled to the substrate 1900 through a set of solder 1912 (e.g., solder balls). Different implementations may couple the first die 1910 to the substrate 1900 differently. In some implementations, the first die 1910 is coupled to the substrate 1900 through a set of pillars and solder. [00123] Stage 3 illustrates a state after an encapsulation layer 1920 is provided (e.g., formed) on the substrate 1900 and the first die 1910. The encapsulation layer 1920 may encapsulate the entire first die 1910 or just part of the first die 1910. The encapsulation layer 1920 may be a mold and/or epoxy fill. [00124] Stage 4 illustrates a state after at least one cavity 1921 is formed in the encapsulation layer 1920. Different implementations may form the cavity 1921 differently. In some implementations, a laser is used to form the cavity 1921. In some implementations, the encapsulation layer 1920 is a photo-patternable layer, and the cavity 1921 can be formed by using a photo-lithography process (e.g., photo-etching process) to pattern the encapsulation layer 1920. [00125] Stage 5 illustrates a state after at least one via 1922 and at least one interconnect 1924 are formed in and on the encapsulation layer 1920. A plating process may be used to form the via 1922 and the interconnect 1924. The interconnect 1924 may include a trace and/or a pad. The interconnect 1924 may be a redistribution interconnect. The via 1922 and the interconnect 1924 may each include a seed metal layer and a metal layer. [00126] Stage 6, as shown in FIG. 19B, illustrates a state after a thermal electric cooler (TEC) 1940 is coupled (e.g., mounted) to the first die 1910. In some implementations, an adhesive (e.g., thermally conductive adhesive) is used to couple the TEC 1940 to the first die 1910. The TEC 1940 may be a bi-directional TEC. The TEC 1940 includes pads and/or terminals (e.g., as described in FIG. 7). The TEC 1940 may be coupled to the first die 1910 such that the pads and/or terminals of the TEC 1940 are coupled (e.g., electrically coupled) to interconnects on the encapsulation layer 1920 (e.g., redistribution interconnects, an interconnect from the interconnects 1924). Stage 6 may illustrate a first package 1950 that includes the substrate 1900, the first die 1910, and the encapsulation layer 1920. The first package 1950 may also include the TEC 1940. [00127] Stage 7 illustrates a state after a second package 1960 is coupled (e.g., mounted) to the first package 1950, such that the TEC 1940 is between the first package 1950 and the second package 1960. The second package 1960 includes a second substrate 1970 (e.g., package substrate), a second die 1980, and a second encapsulation layer 1982. The second substrate 1970 includes at least one dielectric layer 1972 and a set of interconnects 1974 (e.g., traces, pads, vias). A set of solder balls 1976 may be coupled to the second substrate 1970 and interconnects (e.g., interconnect 1924) from the first package 1950. The second die 1980 is coupled (e.g., mounted) to the second substrate 1970 through a set of solder 1984 (e.g., solder balls).
As shown at stage 7, the TEC 1940 is located between the first die 1910 and the second substrate 1970. In some implementations, an adhesive (e.g., thermally conductive adhesive) is used to couple the second substrate 1970 to the TEC 1940. [00128] Stage 8 illustrates a state after a set of solder balls 1990 is coupled to the first package 1950. Stage 8 may include a package on package (PoP) device 1994, which includes the first package 1950, the second package 1960 and the TEC 1940. Exemplary Method for Providing / Fabricating a Package on Package (PoP) Device Comprising Bi-Directional Thermal Electric Cooler (TEC) [00129] FIG. 20 illustrates an exemplary flow diagram of a method 2000 for providing / fabricating a package on package (PoP) device that includes at least one bi-directional thermal electric cooler (TEC). In some implementations, the method 2000 of FIG. 20 may be used to provide / fabricate the PoP device of FIGS. 2-5 and/or other PoP devices in the present disclosure. [00130] It should be noted that the flow diagram of FIG. 20 may combine one or more steps and/or processes in order to simplify and/or clarify the method for providing a PoP device that includes a bi-directional TEC. In some implementations, the order of the processes may be changed or modified. [00131] The method provides (at 2005) a substrate. The substrate may be a package substrate. The substrate may be fabricated or supplied by a supplier or manufacturer. The substrate includes at least one dielectric layer, a set of interconnects (e.g., traces, vias, pads), a first solder resist layer and a second solder resist layer. The dielectric layer may include a core layer and/or a prepreg layer. [00132] The method couples (at 2010) a first die to the substrate. The first die may be coupled (e.g., mounted) to the substrate through a set of solder (e.g., solder balls). Different implementations may couple the first die to the substrate differently. In some implementations, the first die is coupled to the substrate through a set of pillars and solder. [00133] The method optionally provides (at 2015) an encapsulation layer on the substrate and the first die. In some implementations, providing the encapsulation layer includes forming the encapsulation layer on the substrate and the first die such that the encapsulation layer encapsulates the entire first die or just part of the first die. The encapsulation layer may be a mold and/or epoxy fill. [00134] The method forms (at 2020) interconnects in and on the encapsulation layer. In some implementations, forming the interconnects includes forming cavities in the encapsulation layer and forming interconnects in the cavities and/or the encapsulation layer. Different implementations may form the cavities differently. In some implementations, a laser is used to form the cavities. In some implementations, the encapsulation layer is a photo-patternable layer, and the cavities may be formed by using a photo-lithography process (e.g., photo-etching process) to pattern the encapsulation layer. [00135] Forming the interconnects may include forming at least one via and at least one interconnect in and on the encapsulation layer. A plating process may be used to form the vias and the interconnects. The interconnects may include a trace and/or a pad. The interconnects may be redistribution interconnects. The vias and the interconnects may each include a seed metal layer and a metal layer.
[00136] The method couples (at 2025) a thermal electric cooler (TEC) to the first die. In some implementations, an adhesive (e.g., thermally conductive adhesive) is used to couple (e.g., mount) the TEC to the first die. The TEC may be a bi-directional TEC. A first package may be defined by the first substrate, the first die, and the encapsulation layer. The first package may also include the TEC coupled to the first die. [00137] The method couples (at 2030) a second package to the first package, such that the TEC is between the first package and the second package. The second package includes a second substrate (e.g., package substrate), a second die, and a second encapsulation layer. The second substrate includes at least one dielectric layer and a set of interconnects (e.g., traces, pads, vias). A set of solder balls may be coupled to the second substrate and interconnects from the first package. The TEC is located between the first die (of the first package) and the second substrate (of the second package). In some implementations, an adhesive (e.g., thermally conductive adhesive) is used to couple the second substrate to the TEC. [00138] The method provides (at 2035) a set of solder balls to the first package. More specifically, the set of solder balls may be coupled to the first substrate of the first package. Exemplary Package on Package (PoP) Device Comprising Bi-Directional Thermal Electric Cooler [00139] FIG. 21 illustrates an example of another package on package (PoP) device 2100 that includes a first package 2102 (e.g., first integrated device package), a second package 2104 (e.g., second integrated device package), a first thermal electric cooler (TEC) 2110, and a second TEC 2112. In some implementations, the first thermal electric cooler (TEC) 2110 and the second TEC 2112 may be configured as an assembly or an array of TECs, as described in FIGS. 8-9. [00140] The first package 2102 includes a first substrate 2120, a first die 2122 (e.g., first logic die), a second die 2123 (e.g., second logic die), and a first encapsulation layer 2124. The first substrate 2120 includes at least one dielectric layer 2126 and a set of interconnects 2127. The first package 2102 may also include the first TEC 2110 and the second TEC 2112. The first TEC 2110 is coupled to the first die 2122. The second TEC 2112 is coupled to the second die 2123. An adhesive (e.g., thermally conductive adhesive) may be used to couple the TECs (e.g., the first TEC 2110) to the dies (e.g., the first die 2122). [00141] The second package 2104 is coupled (e.g., mounted) to the first package 2102, such that the first TEC 2110 and the second TEC 2112 are between the first package 2102 and the second package 2104. The second package 2104 includes a second substrate 2140, a first die 2142, a second die 2143, a second encapsulation layer 2144, and a third TEC 2150. The second substrate 2140 includes at least one dielectric layer 2146 and a set of interconnects 2147. The first TEC 2110 is between the first die 2122 and the second substrate 2140. The second TEC 2112 is between the second die 2123 and the second substrate 2140. The third TEC 2150 is between the first die 2142 and the second die 2143.
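For illustration only, the coordinated use of the TECs 2110 and 2112 to route heat between the two logic dies (elaborated in the paragraphs below) might be sketched as follows. This is a minimal sketch, not part of the disclosed embodiments; the structure and function names, the single shared temperature limit, and the sign convention (positive current pumps heat away from the die the TEC sits on) are assumptions made for the example.

```c
/* Illustrative sketch only: driving the pair of TECs 2110 and 2112 so
 * that heat leaves one logic die of the first package and is absorbed
 * by the other through the second substrate 2140. */
typedef struct {
    double i_tec_2110;   /* drive current for the TEC on the first die 2122  */
    double i_tec_2112;   /* drive current for the TEC on the second die 2123 */
} tec_pair_drive_t;

tec_pair_drive_t route_heat(double t_die_2122, double t_die_2123,
                            double t_limit, double i_drive)
{
    tec_pair_drive_t d = { 0.0, 0.0 };
    if (t_die_2122 >= t_limit && t_die_2123 < t_limit) {
        /* Heat out of die 2122, through TEC 2110, the substrate 2140,
         * and TEC 2112, into die 2123 (cf. the path described below). */
        d.i_tec_2110 = +i_drive;
        d.i_tec_2112 = -i_drive;
    } else if (t_die_2123 >= t_limit && t_die_2122 < t_limit) {
        /* The opposite routing, from die 2123 toward die 2122. */
        d.i_tec_2110 = -i_drive;
        d.i_tec_2112 = +i_drive;
    }
    return d;   /* both fields zero: TECs left inactive */
}
```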
[00142] The first TEC 2110 may be a bi-directional TEC capable of dissipating heat in a first direction (e.g., in a first time period / frame) and a second direction (e.g., in a second time period / frame), where the second direction is opposite to the first direction. Similarly, the second TEC 2112 may be a bi-directional TEC capable of dissipating heat in a first direction (e.g., in a first time period / frame) and a second direction (e.g., in a second time period / frame), where the second direction is opposite to the first direction. The third TEC 2150 may be a bi-directional TEC capable of dissipating heat in a first direction (e.g., in a first time period / frame) and a second direction (e.g., in a second time period / frame), where the second direction is opposite to the first direction. [00143] In some implementations, the TECs 2110 and 2112 may be bi-directional TECs that may be configured and/or adapted to dynamically (e.g., in real time during operation of the PoP device 2100) dissipate heat back and forth between the first package 2102 and the second package 2104, as described in FIGS. 3-4. [00144] In some implementations, the TECs 2110 and 2112 may be bi-directional TECs that may be configured and/or adapted to dynamically (e.g., in real time during operation of the PoP device 2100) dissipate heat back and forth between the first die 2122 and the second die 2123. That is, the TECs 2110 and 2112 may be configured such that heat that is dissipated away from the first die 2122 may be dissipated towards the second die 2123. Thus, in some implementations, the TECs 2110 and 2112 may be configured so that heat dissipates from the first die 2122, through the first TEC 2110, the second substrate 2140, the second TEC 2112, and to the second die 2123. [00145] In some implementations, the TECs 2110 and 2112 may be configured such that heat that is dissipated away from the second die 2123 may be dissipated towards the first die 2122. Thus, in some implementations, the TECs 2110 and 2112 may be configured so that heat dissipates from the second die 2123, through the second TEC 2112, the second substrate 2140, the first TEC 2110, and to the first die 2122. [00146] In some implementations, the TEC 2150 may be a bi-directional TEC that may be configured and/or adapted to dynamically (e.g., in real time during operation of the PoP device 2100) dissipate heat back and forth between the first die 2142 and the second die 2143. That is, for example, the TEC 2150 may be configured such that heat that is dissipated away from the first die 2142 may be dissipated towards the second die 2143. Different implementations may configure the TECs differently to achieve a desired thermal management of the dies in the PoP device 2100. Exemplary Electronic Devices [00147] FIG. 22 illustrates various electronic devices that may be integrated with any of the aforementioned integrated devices, semiconductor devices, integrated circuits, dies, interposers, packages or package-on-package (PoP) devices. For example, a mobile phone device 2202, a laptop computer device 2204, and a fixed location terminal device 2206 may include an integrated device 2200 as described herein. The integrated device 2200 may be, for example, any of the integrated circuits, dies, integrated devices, integrated device packages, integrated circuit devices, or package-on-package devices described herein. The devices 2202, 2204, 2206 illustrated in FIG. 22 are merely exemplary.
Other electronic devices may also feature the integrated device 2200 including, but not limited to, a group of devices (e.g., electronic devices) that includes mobile devices, hand-held personal communication systems (PCS) units, portable data units such as personal digital assistants, global positioning system (GPS) enabled devices, navigation devices, set top boxes, music players, video players, entertainment units, fixed location data units such as meter reading equipment, communications devices, smartphones, tablet computers, computers, wearable devices, servers, routers, electronic devices implemented in automotive vehicles (e.g., autonomous vehicles), or any other device that stores or retrieves data or computer instructions, or any combination thereof. [00148] One or more of the components, steps, features, and/or functions illustrated in FIGS. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19A-19B, 20, 21 and/or 22 may be rearranged and/or combined into a single component, step, feature or function or embodied in several components, steps, or functions. Additional elements, components, steps, and/or functions may also be added without departing from the disclosure. It should also be noted that FIGS. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19A-19B, 20, 21 and/or 22 and their corresponding descriptions in the present disclosure are not limited to dies and/or ICs. In some implementations, FIGS. 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19A-19B, 20, 21 and/or 22 and their corresponding descriptions may be used to manufacture, create, provide, and/or produce integrated devices. In some implementations, a device may include a die, a die package, an integrated circuit (IC), an integrated device, an integrated device package, a wafer, a semiconductor device, a package on package (PoP) device, and/or an interposer. [00149] The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any implementation or aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects of the disclosure. Likewise, the term "aspects" does not require that all aspects of the disclosure include the discussed feature, advantage or mode of operation. The term "coupled" is used herein to refer to the direct or indirect coupling between two objects. For example, if object A physically touches object B, and object B touches object C, then objects A and C may still be considered coupled to one another, even if they do not directly physically touch each other. [00150] Also, it is noted that the embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
Any of the above methods and/or processes may also be code that is stored in a computer/processor readable storage medium and that can be executed by at least one processing circuit, processor, die, and/or controller (e.g., TEC controller, thermal controller). For example, the die, the TEC controller, and/or the thermal controller may include one or more processing circuits that may execute code stored in a computer/processor readable storage medium. A computer/processor readable storage medium may include a memory (e.g., a memory die, memory in a logic die, memory in a TEC controller, memory in a thermal controller). A die may be implemented as a flip chip, a wafer level package (WLP), and/or a chip scale package (CSP).

[00151] Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.

[00152] The various features of the disclosure described herein can be implemented in different devices and/or systems without departing from the disclosure. It should be noted that the foregoing aspects of the disclosure are merely examples and are not to be construed as limiting the disclosure. The description of the aspects of the present disclosure is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses, and many alternatives, modifications, and variations will be apparent to those skilled in the art. |
A system and method transmits, at a fixed data rate, graphics data received at varying frequencies. The frequency dependent data and an associated data clock signal are received, and the frequency dependent data is converted to frequency independent data. A ratio of a number of data clock cycles to a number of reference clock cycles is determined and transmitted. The frequency independent data and header data are transmitted, at a fixed rate, to a receiver, the fixed rate being a frequency greater than the frequency of the associated data clock signal. The received frequency independent data is converted back to frequency dependent data based upon the received determined ratio. The communication channel may include an optical fiber and a tension member, wherein control data is transmitted along the tension member and graphics data is transmitted along the optical fiber. |
1. A method for transmitting variable frequency dependent data, comprising: (a) receiving frequency dependent data and an associated data clock signal; (b) converting the frequency dependent data into frequency independent data; (c) determining the ratio of the number of data clock cycles to the number of reference clock cycles; (d) transmitting the determined ratio; (e) transmitting the frequency independent data and header data to a receiver at a fixed rate, the fixed rate being a frequency greater than the frequency of the associated data clock signal; (f) receiving the frequency independent data and the determined ratio; and (g) converting the frequency independent data into frequency dependent data based on the received determined ratio.

2. The method of claim 1, further comprising: (h) when there is no frequency independent data to transmit, transmitting an idle code to the receiver to maintain a constant data stream to the receiver.

3. The method of claim 1, further comprising: (h) when there is no header data to transmit, transmitting an idle code to the receiver to maintain a constant data stream to the receiver.

4. The method of claim 1, further comprising: (h) when there is no header data and no frequency independent data to transmit, transmitting an idle code to the receiver to maintain a constant data stream to the receiver.

5. A method of transmitting variable frequency dependent data, comprising: (a) receiving frequency dependent data having a preset resolution format associated therewith; (b) determining timing information from the received frequency dependent data; (c) converting the received frequency dependent data into frequency independent data; (d) encoding the frequency independent data with the determined timing information; (e) transmitting the frequency independent data encoded with the timing information to a receiver at a fixed rate; (f) receiving the frequency independent data encoded with the timing information; (g) extracting the timing information from the frequency independent data encoded with the timing information; and (h) regenerating frequency dependent data having the preset resolution associated therewith based on the extracted timing information.

6. The method of claim 5, further comprising: (i) when there is no frequency independent data encoded with timing information to transmit, transmitting an idle code to the receiver to maintain a constant data stream to the receiver.

7. A component for transmitting graphics data generated by a graphics data source to a display device, comprising: a circuit for receiving, from the graphics data source, frequency dependent data having a resolution format and a data clock frequency associated therewith and generating therefrom frequency independent data encoded with timing information; and a transmitter that transmits the frequency independent data encoded with the timing information to the display device at a fixed rate.

8. The component of claim 7, wherein said transmitter transmits an idle code to the display device to maintain a constant data stream to the display device when there is no frequency independent data encoded with timing information to transmit.

9. The component of claim 7, wherein the received frequency dependent data has a predetermined resolution format.

10. A system for transmitting graphics data generated by a graphics data source to a display device, comprising: a communication channel; a first circuit that receives, from the graphics data source, frequency dependent data having a resolution format and a data clock frequency associated therewith and generates therefrom frequency independent data encoded with timing information; a first transmitter operatively coupled to the communication channel to transmit the frequency independent data encoded with the timing information at a fixed rate; a second circuit operatively coupled to the communication channel to receive the frequency independent data encoded with the timing information; a third circuit operatively coupled to the second circuit to extract the timing information from the frequency independent data encoded with the timing information and to regenerate, based on the extracted timing information, frequency dependent data having a preset resolution associated therewith; and a second transmitter operatively coupled to the third circuit to transmit the frequency dependent data having the preset resolution associated therewith to the display device.

11. The system of claim 10, wherein said communication channel is an optical fiber and said first transmitter optically transmits the frequency independent data encoded with the timing information at a fixed rate.

12. The system of claim 10, wherein said first transmitter transmits an idle code to maintain transmission of a constant data stream when there is no frequency independent data encoded with timing information to transmit.

13. The system of claim 10, wherein the received frequency dependent data has a predetermined resolution format.

14. A system for transmitting graphics data generated by a graphics data source to a display device, comprising: a communication channel, the communication channel including: an optical fiber; a sheath enveloping the optical fiber to protect the optical fiber; and a tension member located within the sheath to provide tensile strength to the optical fiber; a first circuit that receives, from the graphics data source, frequency dependent data having a preset resolution format and a data clock frequency associated therewith, and generates timing information and frequency independent data therefrom; a first transmitter operatively coupled to the communication channel to transmit the timing information along the tension member at a fixed rate and to transmit the frequency independent data along the optical fiber at a fixed rate; a second circuit operatively coupled to the communication channel to receive the timing information and the frequency independent data; a third circuit operatively coupled to the second circuit to extract, based on the received timing information, frequency dependent data having the preset resolution associated therewith; and a second transmitter operatively coupled to the third circuit to transmit the frequency dependent data having the preset resolution associated therewith to the display device.

15. The system of claim 14, wherein said first transmitter transmits an idle code to maintain transmission of a constant data stream when there is no frequency independent data encoded with timing information to transmit.

16. A method of transmitting variable frequency dependent data, comprising: (a) receiving frequency dependent data and an associated data clock signal; (b) converting the frequency dependent data into frequency independent data; (c) determining the ratio of the number of data clock cycles to the number of reference clock cycles; (d) transmitting the determined ratio; (e) transmitting the frequency independent data and header data to a receiver at a fixed rate, the fixed rate being a frequency less than the frequency of the associated data clock signal; (f) receiving the frequency independent data and the determined ratio; and (g) converting the frequency independent data into frequency dependent data based on the received determined ratio.

17. The method of claim 16, further comprising: (h) when there is no frequency independent data to transmit, transmitting an idle code to the receiver to maintain a constant data stream to the receiver.

18. The method of claim 16, further comprising: (h) when there is no header data to transmit, transmitting an idle code to the receiver to maintain a constant data stream to the receiver.

19. The method of claim 16, further comprising: (h) when there is no header data and no frequency independent data to transmit, transmitting an idle code to the receiver to maintain a constant data stream to the receiver.

20. A system for transferring data between a remote central computing device and a local workstation, comprising: a remote central computing device having a plurality of primary processing devices; an electrical/optical interface operatively coupled to the remote central computing device to provide a separate communication channel for each primary processing device; a plurality of communication cables operatively coupled to the electrical/optical interface; and a local workstation operatively connected to the communication cables; each of the communication cables including: an optical fiber; a sheath enveloping the optical fiber to protect the optical fiber; and a tension member located within the sheath to provide tensile strength to the optical fiber; the electrical/optical interface including: a first circuit that receives frequency dependent data from a graphics data source coupled to a first primary processing device, the frequency dependent data having a preset resolution format and a data clock frequency associated therewith, and generates timing information and frequency independent data therefrom; and a first transmitter operatively coupled to the communication channel coupled to the first primary processing device to transmit the timing information and the frequency independent data along the optical fiber at a fixed rate; the local workstation including a workstation interface, the workstation interface including: a circuit operatively coupled to the communication cable to receive the timing information and the frequency independent data; an extraction circuit operatively coupled to the circuit to extract, based on the received timing information, frequency dependent data having the preset resolution associated therewith; and a display circuit operatively coupled to the extraction circuit to transmit the frequency dependent data having the preset resolution associated therewith to a display device.

21. The system of claim 20, wherein said workstation interface transmits data from said local workstation to said remote central computing device along said communication cable.

22. The system of claim 20, wherein said workstation interface transmits data from said local workstation to said remote central computing device along said tension member of said communication cable.

23. The system of claim 20, wherein said workstation interface transmits data from said local workstation to said remote central computing device along said optical fiber of said communication cable.

24. The system of claim 20, wherein said interface communicates non-graphical data from said remote central computing device to said local workstation along said communication cable.

25. The system of claim 20, wherein said interface transmits non-graphical data from said remote central computing device to said local workstation along said tension member of said communication cable.

26. The system of claim 20, wherein said interface transmits non-graphical data from said remote central computing device to said local workstation along said optical fiber of said communication cable. |
System and method for providing fixed rate transmission for digital video interface and high definition multimedia interface applications

Technical field

The present invention relates to a method and system for transmitting video data over a reduced number of digital video interface and/or high definition multimedia interface channels.

Background

Digital video interfaces and high definition multimedia interfaces are high speed serial interconnect standards for transferring graphics data from sources to certain types of displays. These standards operate at very low differential voltage levels over a wide range of data rates. The combination of high data rates (250 Mb/s to 1.65 Gb/s), low voltage swings (800 mV), signal reflections from cables and connectors, and compatibility issues between transmitter and receiver manufacturers limits the interface connection to a relatively short distance.

One solution to this distance limitation is to transmit digital video interface and/or high definition multimedia interface data over optical fiber to increase the distance between the data source and the display. This solution is achieved by converting each electrical bit into an optical on/off state with a laser. The receiver at the other end of the fiber uses optical detectors and electronics to convert the optical states back into electrical states.

However, this approach requires each electrical channel to be mapped 1:1 onto a fiber channel. In current graphics and video applications using digital video interfaces and/or high definition multimedia interfaces, three channels are used for graphics data, one channel for the clock, one channel for upstream control data, and one channel for downstream control data.

Fig. 1 shows an example of such a conventional system. In FIG. 1, digital video source 20 is optically coupled to display device 30 via fiber optic cable 10. The system requires multiple lasers, detectors, and fibers to establish a link between source 20 and display 30.

As shown in Figure 2, the system of Figure 1 requires a large amount of fiber, which increases the cost of the system. In FIG. 2, the optical cable 100 includes three optical fibers (A, B, C) for graphics data, one optical fiber (D) for the clock, one optical fiber (E) for upstream control data, and one optical fiber (F) for downstream control data. It is noted that although a smaller number of fibers could be used, in such a configuration the control data and return data paths are omitted, which is inconsistent with the digital video interface and/or high definition multimedia interface specifications.

As described above, large volumes of information can be transmitted quickly and reliably using optical fibers. Optical fibers include quartz fibers, such as quartz single mode fibers, as well as plastic fibers and other fiber types. In particular, plastic optical fibers have a larger diameter than quartz single mode fibers and are excellent in flexibility. Accordingly, an optical fiber cable using plastic optical fiber as its optical transmission line offers excellent workability in the final processing and fiber bonding required at the time of installation and wiring. It is effective to use such a cable as a short-distance trunk in a building after being introduced from a trunk cable, a split cable, or a line cable of a LAN system.

Optical cables are often configured with an outer sheath to cover the optical fibers, and with tensile strength stiffeners (tension members) that are used to prevent tension from being applied to the fibers.
Typically, the surface of the fiber is coated with a resin coating to prevent interference with light entry and to avoid damage due to mechanical external forces or other causes. Where the cable is used for communication, it typically contains two or more fibers for input and output.

As noted above, some fiber optic cables use additional tension members within the envelope of the fiber optic assembly to provide greater tensile strength than the fibers in the assembly alone. This helps to reduce cable stress, which over time increases the loss in the fiber. Additional tension members are most often added to plastic fiber assemblies, but they can be used with any fiber type that benefits from additional tensile strength.

Regarding another aspect of conventional digital video interface and/or high definition multimedia interface systems, the data transfer system forwards or returns data between point A and point B, and the amount of data transmitted in one direction differs from the amount transmitted in the other direction. More specifically, in a conventional system, point A may transmit data to point B at a rate of 2 Gb/s, while point B may transmit only 1 Mb/s of data to point A. Typically, this type of system requires either two fiber channels, one for high speed downstream data and the other for low speed upstream data, or a single-fiber system that generates bidirectional data streams at two different wavelengths, which requires additional circuitry.

In addition, graphics applications operate at different clock rates for different display resolutions. However, in many data transfer architectures, it is advantageous to transfer data at a fixed data rate. A problem involved in achieving this advantage is providing an appropriate conversion of the variable rate data received by the converter to a fixed, actually transmitted data rate, and then converting the fixed rate data back to variable rate data without loss.

Finally, a digital video interface and/or high definition multimedia interface system sends graphics data and control data from the source to the display, and control data from the display back to the source. Traditionally, graphics data is transmitted at a high data rate while control information is transmitted at a low data rate. Since control data flows in both directions, conventional systems utilize a bidirectional link. However, the use of a bidirectional link adds additional channels to the communication cable, thereby increasing cost.

Accordingly, it is desirable to provide a digital video interface and/or high definition multimedia interface system that provides a fixed data transfer rate between source and display components, with appropriate conversion from variable data rates to fixed data rates, and from fixed data rates back to variable data rates, without data loss.
Additionally, it is desirable to provide a digital video interface and/or high definition multimedia interface system that utilizes a communication cable providing two-way communication of control data without increasing the cost of the cable.

Also, it is desirable to provide a digital video interface and/or high definition multimedia interface system that utilizes two-way communication of control data without increasing the cost of the system.

In addition, it is desirable to provide a digital video interface and/or high definition multimedia interface system that employs a protocol that reduces the number of channels required in a communication cable.

Summary of the invention

A first aspect of the invention is a method for transmitting variable frequency dependent data. The method receives frequency dependent data and an associated data clock signal; converts the frequency dependent data into frequency independent data; determines a ratio of the number of data clock cycles to the number of reference clock cycles; transmits the determined ratio; transmits the frequency independent data and header data to a receiver at a fixed rate, the fixed rate being a frequency greater than the frequency of the associated data clock signal; receives the frequency independent data and the determined ratio; and converts the frequency independent data to frequency dependent data based on the received determined ratio.

A second aspect of the invention is a method for transmitting variable frequency dependent data. The method receives frequency dependent data having a preset resolution format associated therewith; determines timing information from the received frequency dependent data; converts the received frequency dependent data into frequency independent data; encodes the frequency independent data with the determined timing information; transmits the frequency independent data encoded with the timing information to a receiver at a fixed rate; receives the frequency independent data encoded with the timing information; extracts the timing information from the frequency independent data encoded with the timing information; and regenerates frequency dependent data having the preset resolution associated therewith based on the extracted timing information.

A third aspect of the invention is a component for transmitting graphics data generated by a graphics data source to a display device. The component includes circuitry for receiving, from the graphics data source, frequency dependent data having a resolution format and a data clock frequency associated therewith and for generating therefrom frequency independent data encoded with timing information; and a transmitter that transmits the frequency independent data encoded with the timing information to the display device at a fixed rate.

A fourth aspect of the invention is a system for transmitting graphics data generated by a graphics data source to a display device.
The system includes: a communication channel; a first circuit that receives, from the graphics data source, frequency dependent data having a resolution format and a data clock frequency associated therewith and generates therefrom frequency independent data encoded with timing information; a first transmitter operatively coupled to the communication channel to transmit the frequency independent data encoded with the timing information at a fixed rate; a second circuit operatively coupled to the communication channel to receive the frequency independent data encoded with the timing information; a third circuit operatively coupled to the second circuit to extract the timing information from the frequency independent data encoded with the timing information and to regenerate, based on the extracted timing information, frequency dependent data having a preset resolution associated therewith; and a second transmitter operatively coupled to the third circuit to transmit the frequency dependent data having the preset resolution associated therewith to the display device.

Another aspect of the invention is a method of transmitting variable frequency dependent data. The method receives frequency dependent data and an associated data clock signal; converts the frequency dependent data into frequency independent data; determines a ratio of the number of data clock cycles to the number of reference clock cycles; transmits the determined ratio; transmits the frequency independent data and header data to a receiver at a fixed rate, the fixed rate being a frequency less than the frequency of the associated data clock signal; receives the frequency independent data and the determined ratio; and converts the frequency independent data to frequency dependent data based on the received determined ratio.

Another aspect of the invention is a system for transmitting graphics data generated by a graphics data source to a display device. The system includes: a communication channel having an optical fiber, a sheath encasing the optical fiber to protect it, and a tension member located within the sheath to provide tensile strength to the optical fiber; a first circuit for receiving frequency dependent data from the graphics data source, the frequency dependent data having a preset resolution format and a data clock frequency associated therewith, and generating timing information and frequency independent data therefrom; a first transmitter operatively coupled to the communication channel to transmit the timing information along the tension member at a fixed rate and to transmit the frequency independent data along the optical fiber at a fixed rate; a second circuit operatively coupled to the communication channel to receive the timing information and the frequency independent data; a third circuit operatively coupled to the second circuit to extract, based on the received timing information, frequency dependent data having the preset resolution associated therewith; and a second transmitter operatively coupled to the third circuit to transmit the frequency dependent data having the preset resolution associated therewith to the display device.

Another aspect of the invention is a point-to-point communication cable.
The point-to-point communication cable includes: a first interface having first and second communication components to provide a communication channel; a second interface having third and fourth communication components to provide a communication channel; an optical fiber operatively coupled to the first communication component of the first interface and the third communication component of the second interface to provide an optical path of the communication channel between the first interface and the second interface; a sheath encasing the optical fiber to protect the optical fiber; and a tension member located within the sheath to provide tensile strength to the optical fiber. The tension member, operatively coupled to the second communication component of the first interface and the fourth communication component of the second interface, provides an electrical path between the first interface and the second interface.

Another aspect of the invention is a communication system for providing data transfer between two devices. The communication system includes a point-to-point communication cable having: a first interface including first and second communication components to provide a communication channel; a second interface having third and fourth communication components to provide a communication channel; an optical fiber operatively coupled to the first communication component of the first interface and the third communication component of the second interface to provide a communication channel between the first interface and the second interface; a sheath encasing the optical fiber to protect the optical fiber; and a first tension member located within the sheath to provide tensile strength to the optical fiber. The first tension member, operatively coupled to the second communication component of the first interface and the fourth communication component of the second interface, provides an electrical path between the first interface and the second interface. The communication system also includes: a current source operatively coupled to the second communication component to provide current to the first tension member; a switch operatively coupled to the fourth communication component to modulate the current flowing through the first tension member in response to data generated by the device coupled to the second interface; and a current monitor operatively coupled to the second communication component to monitor the modulated current and generate a data signal in response thereto.

Another aspect of the invention is a method of transmitting graphics data from a source to a receiver. The method converts frequency dependent data into frequency independent data; transmits clock data from the source at a fixed rate, the clock data corresponding to a source pixel clock frequency associated with the frequency dependent data; transmits the frequency independent data from the source at a fixed rate; receives, at the receiver, the frequency independent data and the clock data; stores the received frequency independent data in a memory; regenerates at the receiver, based on the received clock data, a pixel clock signal having a frequency corresponding to the source pixel clock frequency associated with the frequency dependent data; and retrieves the stored data from the memory using the regenerated pixel clock signal to produce frequency dependent data.

Another aspect of the invention is a system for regenerating and transmitting graphics data from a source to a receiver.
The system includes a graphics data source having circuitry for converting frequency dependent data associated with a source pixel clock frequency into frequency independent data and for generating clock data corresponding to the source pixel clock frequency associated with the frequency dependent data, and a transmitter communicatively coupled to the source to transmit the clock data and the frequency independent data at a fixed rate. The receiver includes: a memory that stores the received frequency independent data; a digital clock synthesizer that, based on the received clock data, regenerates a pixel clock signal having a frequency corresponding to the source pixel clock frequency associated with the frequency dependent data; and a pick-up circuit that retrieves the stored data from the memory using the regenerated pixel clock signal to produce the frequency dependent data.

Another aspect of the invention is a component for converting frequency independent data into frequency dependent data. The component includes: a receiver that receives frequency independent data at a fixed data rate; a memory that stores the frequency independent data; a digital clock synthesizer that regenerates a pixel clock signal having a frequency corresponding to the source pixel clock frequency associated with the frequency dependent data; and a pick-up circuit that retrieves the stored data from the memory using the regenerated pixel clock signal to produce frequency dependent data.

Another aspect of the invention is a system for transmitting graphics data generated by a graphics data source to a display device. The system includes: a communication channel; a first circuit for receiving frequency dependent data from the graphics data source, the frequency dependent data having a preset resolution format and a data clock frequency associated therewith, and generating timing information and frequency independent data therefrom; a first transmitter coupled to the communication channel to transmit the frequency independent data and the timing information at a fixed rate; a second circuit operatively coupled to the communication channel to receive the timing information and the frequency independent data; a memory storing the frequency independent data; a digital clock synthesizer that, based on the received timing information, regenerates a pixel clock signal having a frequency corresponding to the source pixel clock frequency associated with the frequency dependent data; a pick-up circuit that retrieves the stored data from the memory using the regenerated pixel clock signal to produce frequency dependent data; and a second transmitter operatively coupled to the pick-up circuit to transmit the frequency dependent data having the preset resolution associated therewith to the display device.

Another aspect of the present invention is a method of baseband directed graphics data communication between a graphics data source device and a display device. The method transmits display data and control data from the graphics data source device to the display device over a first communication channel during a data cycle, and transfers return data from the display device to the graphics data source device over the first communication channel during a non-data cycle.

Another aspect of the present invention is a method of baseband directed graphics data communication between a graphics data source device and a display device.
The method transmits the start of a data cycle signal from the graphics data source device to the display device through a first communication channel; transmits display data from the graphics data source device to the display device through the first communication channel; transmits the end of the data cycle signal from the graphics data source device to the display device through the first communication channel; and, in response to the transmitted end of the data cycle signal, transmits return data from the display device to the graphics data source device via the first communication channel.

Another aspect of the invention is a system for providing baseband oriented graphics data communication. The system includes: a graphics data source device that generates display data and control data; a display device that displays the display data and generates return data; and a first communication channel operatively coupled to the graphics data source device and the display device to provide a communication channel therebetween. The graphics data source device includes: a source transmitter to transmit a start of a data cycle signal, an end of a data cycle signal, and display data; a source receiver that receives the return data; and a source switch operatively coupled to the source transmitter, the source receiver, and the first communication channel. The source switch connects the source transmitter to the first communication channel in response to the start of the data cycle signal, and connects the source receiver to the first communication channel in response to the end of the data cycle signal. The display device includes: a display transmitter that transmits the return data; a display receiver that receives the start of the data cycle signal, the end of the data cycle signal, and the display data; and a display switch operatively coupled to the display transmitter, the display receiver, and the first communication channel. The display switch connects the display transmitter to the first communication channel in response to the end of the data cycle signal, and connects the display receiver to the first communication channel in response to the start of the data cycle signal.

Another aspect of the invention is a system for transferring data between a remote central computing device and a local workstation. The system includes: a remote central computing device having a plurality of primary processing devices; an electrical/optical interface operatively coupled to the remote central computing device to provide a separate communication channel for each primary processing device; a plurality of communication cables operatively coupled to the electrical/optical interface; and a local workstation operatively coupled to the communication cables. Each communication cable includes an optical fiber, a sheath that encases the optical fiber to protect it, and a tension member positioned within the sheath to provide tensile strength to the optical fiber.
The electrical/optical interface includes: a first circuit to receive frequency dependent data from a graphics data source coupled to a first primary processing device, the frequency dependent data having a preset resolution format and a data clock frequency associated therewith, and to generate therefrom timing information and frequency independent data; and a first transmitter operatively coupled to the communication channel associated with the first primary processing device to transmit the timing information and the frequency independent data along the optical fiber at a fixed rate. The local workstation includes a workstation interface having: circuitry operatively coupled to the communications cable for receiving the timing information and the frequency independent data; a decimation circuit operatively coupled to the circuitry to extract, based on the received timing information, frequency dependent data having the preset resolution associated therewith; and a display circuit operatively coupled to the decimation circuit to transmit the frequency dependent data having the preset resolution associated therewith to the display device.

Another aspect of the invention is a system for transferring data between a remote central computing device and a local workstation. The system includes: a remote central computing device having a plurality of primary processing devices; an electrical/optical interface operatively coupled to the remote central computing device to provide a separate communication channel for each primary processing device; a plurality of communication cables operatively coupled to the electrical/optical interface; and a local workstation operatively coupled to the communication cables. Each communication cable includes an optical fiber, a sheath that encases the optical fiber to protect it, and a tension member positioned within the sheath to provide tensile strength to the optical fiber. The electrical/optical interface includes: a first circuit to receive frequency dependent data from a graphics data source coupled to a first primary processing device, the frequency dependent data having a preset resolution format and a data clock frequency associated therewith, and to generate therefrom timing information and frequency independent data; and a first transmitter operatively coupled to the communication channel associated with the first primary processing device to transmit the timing information and the frequency independent data along the optical fiber at a fixed rate. The local workstation includes: a workstation interface having circuitry operatively coupled to the communications cable for receiving the timing information and the frequency independent data; a memory storing the frequency independent data; a digital clock synthesizer that, based on the received timing information, regenerates a pixel clock signal having a frequency corresponding to the source pixel clock frequency associated with the frequency dependent data; a capture circuit that retrieves the stored data from the memory using the regenerated pixel clock signal to generate frequency dependent data; and a display circuit operatively coupled to the capture circuit to transmit the frequency dependent data having the preset resolution associated therewith to the display device.

DRAWINGS

The invention may take form in various components and arrangements of components, and in various steps and arrangements of steps.
The drawings, which are intended only to illustrate the preferred embodiments, are not to be construed as limiting the invention. In the drawings:

Figure 1 illustrates a prior art digital video data source/display system;
Figure 2 shows a prior art communication cable used in Figure 1 for carrying digital video data;
Figure 3 illustrates a digital video data source/display system in accordance with the teachings of the present invention;
Figure 4 shows a communication cable used in Figure 3 for carrying digital video data in accordance with the teachings of the present invention;
Figure 5 is a schematic diagram of line and frame timing for digital display data in accordance with the teachings of the present invention;
Figure 6 is an illustration of the graphics data signal of Figure 5;
Figure 7 is a block diagram showing a protocol for generating timing information in accordance with the concepts of the present invention;
Figure 8 illustrates a structure for generating one data stream from three data streams in accordance with the teachings of the present invention;
Figure 9 is a block diagram of a fixed rate optical extender for a digital video interface in accordance with the teachings of the present invention;
Figure 10 is a block diagram showing a second fixed rate optical extender for a digital video interface in accordance with the teachings of the present invention;
Figure 11 is an illustration of an optical communication system in accordance with the teachings of the present invention;
Figure 12 illustrates a digital video data communication cable in accordance with the teachings of the present invention;
Figure 13 is a diagram showing the use of current modulation to transmit digital video data in accordance with the teachings of the present invention;
Figure 14 is a block diagram showing the use of current modulation to transmit digital video data in accordance with the teachings of the present invention;
Figure 15 is a block diagram of a transmitter/receiver pair transmitting digital video data at a fixed rate in accordance with the teachings of the present invention;
Figure 16 is a block diagram of circuitry for generating a protocol for transmitting digital video data in accordance with the teachings of the present invention;
Figure 17 shows a block diagram of another transmitter/receiver pair that transmits digital video data at a fixed rate in accordance with the teachings of the present invention;
Figure 18 illustrates the use of a memory unit to convert data rates in accordance with the teachings of the present invention;
Figures 19 and 20 illustrate memory storage conditions over a period of time;
Figure 21 illustrates a transmitter/receiver system between a data source and a display in accordance with the teachings of the present invention;
Figure 22 is a diagram of a communication flow between a data source and a display in accordance with the teachings of the present invention;
Figure 23 is a block diagram of the communication process of Figure 22 in accordance with the concepts of the present invention; and
Figure 24 is a block diagram illustrating the use of the concepts of the present invention in a remote workstation environment.

DETAILED DESCRIPTION

The invention will now be described in conjunction with the preferred embodiments; however, it is to be understood that the embodiments described herein are not intended to limit the invention. Rather, the invention is intended to cover all modifications, variations, and equivalents thereof. For a general understanding of the invention, reference numerals are given in the figures.
In the drawings, like reference numerals are used to refer to like elements. It is also noted that the various figures are not drawn to scale, and that certain aspects are deliberately drawn out of proportion to present the features and concepts of the present invention.

As noted above, it is desirable to reduce the number of channels required to transmit digital video interface and/or high definition multimedia interface data. By reducing the number of channels, the number of fibers, detectors, lasers, and supporting integrated circuits is also reduced, thereby providing a much more cost effective solution without adversely affecting image or data quality.

An example of such a system is shown in Figure 3. As shown in FIG. 3, digital video source 20 is optically coupled to display device 30 via fiber optic cable 11. As shown in FIG. 4, the system of FIG. 3 needs to provide only one channel (A) for graphics data and one channel (B) for clock data, upstream control data, and downstream control data, and the system does not require a large number of lasers, detectors, and optical fibers to establish a link between source 20 and display 30. This is accomplished by specifying a different configuration protocol, as explained in more detail below.

To reduce the number of digital video interface and/or high definition multimedia interface channels, the additional bandwidth in the graphics data stream is exploited, and the various digital video interface and/or high definition multimedia interface resolution rates are converted to a fixed data rate.

In a preferred embodiment, the fixed data rate may be a rate higher than that of the highest digital video interface and/or high definition multimedia interface resolution. By establishing the fixed data rate at a rate higher than that of the highest resolution, multiple channels can be converted into one downstream channel and one upstream channel.

The Video Electronics Standards Association (VESA) is the standards body that sets video and graphics resolution standards. The VESA standards are used as the input and output formats for digital video interface and/or high definition multimedia interface transmitters and receivers. The VESA standards also define the amount of data activation time and blanking time (no-data periods). These specifications describe the timing of the data, decomposed into lines (one line of display data) and frames (the time from a first line of data until that line receives new data). An illustration of such a specification is shown in Figures 5 and 6, wherein Figure 5 shows the line timing and frame timing of digital display data and Figure 6 is an illustration of the graphics data signal of Figure 5.

As shown in FIG. 5, there is a blanking time before and after the active display data of each line, and there is also a blanking time before and after the last line of display data. The next line of data then starts again at the top of the screen, representing the next video/graphics frame.

To convert multiple channels into one downstream channel and one upstream channel, the system shown in Figure 7 measures the data initially entering the system (40) and produces timing information that is included in header information (42). The header information is multiplexed with the graphics data and the idle code (44) to produce a stream of serial data. In this system, when there is no graphics data or header data to transmit, an idle code is transmitted; a minimal sketch of this framing appears below.
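As a concrete illustration, the following Python sketch frames one fixed-rate serial packet in the spirit of Figures 7 and 8: measured timing information goes into a header, the red, green, and blue channels are multiplexed behind it, and idle codes pad the stream whenever no header or graphics data is pending. The field layout, byte widths, start-of-header marker, and idle code value are illustrative assumptions, not values taken from the patent or from any interface standard.

```python
# Minimal sketch of the header/payload/idle framing described above.
# All byte values and field widths are assumed for illustration only.

IDLE = b"\x7C"  # assumed idle code, sent whenever no header/graphics data is pending

def make_header(hsync_count: int, vsync_count: int, clock_ratio: int) -> bytes:
    """Pack measured timing information into a fixed-size header (assumed layout)."""
    return (b"\x5A"                                   # assumed start-of-header marker
            + hsync_count.to_bytes(2, "big")
            + vsync_count.to_bytes(2, "big")
            + clock_ratio.to_bytes(4, "big"))

def frame_stream(pixels, hsync_count, vsync_count, clock_ratio, slots):
    """Yield exactly `slots` bytes: header, then RGB payload, then idle fill.

    `pixels` is an iterable of (r, g, b) tuples; the three colour channels
    are multiplexed byte-by-byte into the single serial stream.
    """
    out = bytearray(make_header(hsync_count, vsync_count, clock_ratio))
    for r, g, b in pixels:
        out += bytes((r, g, b))
    while len(out) < slots:          # nothing left to send: pad with idle codes
        out += IDLE                  # so the fixed-rate stream keeps the receiver locked
    return bytes(out[:slots])

stream = frame_stream([(255, 0, 0), (0, 255, 0)], 1, 0, 0x00010000, slots=32)
```

Because the receiver sees a byte in every slot regardless of the input rate, it stays locked to the fixed-rate stream even when the source is nearly idle, which is exactly the role the idle code plays in the text above.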
Sending an idle code when there is no graphics data or header data to transmit allows the fixed rate data stream to continue carrying information, keeping the receiver locked. In other words, even when a very low input data rate is used, idle codes are sent to maintain the fixed rate data stream.

Figure 8 shows a data structure that takes three different data channels (plus control data) and places them in one data stream. The data in the serial stream is encoded such that the receiver can extract the data, at the fixed rate frequency, without using a separate clock signal. More specifically, the receiver at the other end reads the header information, regenerates the necessary timing information, and re-extracts the data into the correct resolution format.

As described above and as shown in FIG. 8, timing information generated from the VSYNC, HSYNC, DataCLOCK, and PixelCLOCK signals is placed at the head of each data packet. Graphics data from the various data channels (red, green, and blue) is encoded and multiplexed to follow the header information. When there is no graphics data or header data to be sent, idle codes are sent to keep the receiver locked. As noted above, Figure 9 is a block diagram of a fixed rate optical extender for a digital video interface in accordance with the teachings of the present invention.

As shown in FIG. 9, graphics data and timing signals are fed from graphics card 200 to a variable rate-to-fixed rate digital video and/or high definition multimedia conversion component 210. The conversion component 210 includes a digital video interface and/or high definition multimedia interface receiver 212. Receiver 212 measures the various timing signals to generate timing information, which is fed to graphics encoder fixed rate circuit 214. Graphics encoder fixed rate circuit 214 also receives the graphics data from graphics card 200.

The graphics encoder fixed rate circuit 214 generates header information from the timing information and encodes the multiple channels of graphics data, i.e., the red, green, and blue data channels. The graphics encoder fixed rate circuit 214 also passes the header information, the graphics data, and appropriate idle codes, as needed, to serializer 216. Serializer 216 multiplexes this information to generate a serial data stream having a fixed data rate.

The serial data stream having a fixed data rate is converted to a stream of optical pulses by VCSEL driver 220 and VCSEL 230. The light pulses are fed to interface block 260 for transmission over the fiber in cable 400, ultimately to be displayed on display device 300.

At the display device end, interface block 370 receives the light pulses from the fiber in cable 400. The light pulses are converted to electrical signals by PIN 340, TIA 330, and limiting amplifier 320. The fixed data rate electrical data stream is deserialized by deserializer 316. The deserialized data is decoded by graphics decoder fixed rate circuit 314 to produce graphics data and timing information. The timing information is converted to timing signals by digital video interface and/or high definition multimedia interface transmitter 312.
The timing signals and the decoded graphics data are fed to the display device 300 to correctly display the images or information.

Control data from display device 300 is fed to graphics decoder fixed rate circuit 314 for transmission back to the data source. The control data is converted to a stream of optical pulses by LED driver 320 and LED source 330. The light pulses associated with the control data are sent to interface block 370 for transmission over the fiber in cable 400, for eventual use by graphics encoder fixed rate circuit 214. At the source end, the light pulses associated with the control data are converted to electrical signals by LED detector 250 and TIA 240.

Figure 10 is a block diagram showing another fixed rate optical extender for a digital video interface in accordance with the teachings of the present invention. As shown in FIG. 10, digital video interface and/or high definition multimedia interface 510 generates graphics data and timing signals that are fed to digital video interface and/or high definition multimedia interface receiver 520. Receiver 520 measures the various timing signals to generate timing information, which is fed to programmable gate array 550.

The programmable gate array 550 generates header information from the timing information and encodes the multiple channels of graphics data, i.e., the red, green, and blue data channels. Programmable gate array 550 also passes the header information, the graphics data, and appropriate idle codes, as needed, to digital-to-optical converter 560. Digital-to-optical converter 560 converts the data into a stream of optical pulses. The light pulses are fed to optical transceiver 570 for transmission along the fiber of cable 400.

At the display device end, optical transceiver 670 receives the light pulses from the fiber in cable 400. The light pulses are converted to electrical signals by optical-to-digital converter 660. The fixed data rate electrical data stream is decoded by programmable gate array 650 to produce graphics data and timing information. The timing information is converted to timing signals by digital video interface and/or high definition multimedia interface transmitter 620. The timing signals and decoded graphics data are fed to digital video interface and/or high definition multimedia interface 610.

As described above with respect to conventional digital video interface and/or high definition multimedia interface systems, the data transfer system transfers data from point A to point B; however, the amount of data transmitted in one direction differs from the amount transmitted in the other direction. More specifically, in a conventional system, point A may transmit 2 Gb/s of data to point B, while point B may transmit only 1 Mb/s of data to point A. Typically, this type of system requires either two fiber channels, one for high speed downstream data and the other for low speed upstream data, or a single-fiber system with bidirectional data streams at two different wavelengths, which adds additional circuitry.

To avoid this problem, as shown in FIG. 11, one scheme uses an optical fiber to transfer high speed data from point A to point B, but uses an electrical signal-carrying medium to transfer data from point B to point A. For example, as shown in FIG. 12, the fiber optic assembly includes fibers (r1, r2, r3, and r4) for high data rate signals and tension members (T1, T2) made from low resistance materials. The tension members (T1, T2) can then be used to carry electrical signals at lower data rates.

There are many ways to establish an electrical signal on the tension members (T1, T2). The tension members (T1, T2) can carry DC signals such as power and ground, and a combination of a DC level and an AC component can also be used to supply power. As shown in Figure 13, low frequency modulation can be embedded in these signals to provide low data rate information.

Another example may utilize current modulation, as shown in FIG. 14. In FIG. 14, current from power source 1100 flows through current monitor 1110 before being carried on the tension members (T1, T2). At the other end, in parallel with remote system 1130, current modulator 1120 modulates the current in response to the return data. The modulation is reflected back at current monitor 1110, which captures the data corresponding to the modulation. A minimal sketch of this return channel follows.
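As a rough illustration of the current-modulated return channel of Figures 13 and 14, the sketch below models the remote modulator drawing extra current for each 1 bit and the source-side monitor recovering the bits by thresholding the sensed loop current. The current levels and threshold are assumed values chosen for illustration, not parameters from the patent.

```python
# Illustrative sketch of the Figure 13-14 return channel: the remote end
# modulates the supply current flowing through the tension members, and the
# current monitor at the source recovers the bits by thresholding.

IDLE_MA, MARK_MA = 100.0, 130.0   # assumed DC supply level and modulated level

def modulate(bits):
    """Remote current modulator: draw extra current for each 1 bit."""
    return [MARK_MA if b else IDLE_MA for b in bits]

def monitor(samples, threshold_ma=115.0):
    """Current monitor at the source: threshold the sensed loop current."""
    return [1 if s > threshold_ma else 0 for s in samples]

loop_current = modulate([1, 0, 1, 1, 0])
assert monitor(loop_current) == [1, 0, 1, 1, 0]
```

The point of the design is that the return path reuses conductors that the cable already needs for tensile strength and power, so the low-rate upstream data costs no extra fiber, laser, or detector.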
In the various solutions described above, data must be transmitted at a fixed data rate. Transmitting data at a fixed rate requires circuitry that converts variable rate data to the fixed rate. Converting one data rate to another also requires some type of memory device, which allows data to be written into the memory at one rate and read out at another rate. For example, a FIFO (first in, first out) type of memory unit can be used.

As shown in Figure 15, the data transfer system has a transmitter circuit at one end and a receiver circuit at the other end. In such a graphics system, the data entering the transmitter can arrive at various rates, depending on user requirements and display capability. The graphics resolution in use determines the pixel clock frequency of the display system. The transmitter uses storage unit 1200 to convert the variable rate input to a fixed rate. The fixed rate data is transmitted to the receiver at the other end through some type of medium or channel. The receiver receives the fixed rate data and stores it in storage unit 1250. Data must be read out of storage unit 1250 at the same rate at which data is read into storage unit 1200 in the transmitter at the other end of the link.

However, the actual pixel clock from the transmitter is not sent with the fixed rate data, so the pixel clock must be regenerated at the receiver. The regenerated pixel clock at the receiver must match the pixel clock of the transmitter, or, over time, storage unit 1250 will be overfilled or underfilled.

More specifically, as shown in Figure 15, if Z = X, data will enter and leave the system at the same rate. On the other hand, if Z > X, data will leave the system faster than data enters it, causing storage unit 1250 to request more data than is available and thereby producing an underfilled condition. Finally, if Z < X, data will leave the system more slowly than data enters it, causing storage unit 1250 to accumulate too much data and thereby overfill as time passes.

Overfilling or underfilling storage unit 1250 will cause errors in the displayed image: either there is not enough data in storage unit 1250, or there is too much data in storage unit 1250 and data must be discarded, so that not all of the image can be displayed. Since the receiver's clock rate is very close to the transmitter's clock rate, the error appears relatively slowly, essentially causing the image to scroll. The sketch below illustrates the overfill and underfill behavior.
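The overfill and underfill conditions can be seen in a small simulation. In the sketch below, a writer deposits one word per write clock at rate X and a reader drains one word per read clock at rate Z; mismatched rates eventually overfill or underfill the FIFO, exactly as described for storage unit 1250. The clock rates, FIFO depth, and time step are illustrative assumptions.

```python
# Minimal sketch of the FIFO rate-conversion problem: data is written at the
# pixel rate X and read at the (regenerated) rate Z. Rates, depth, and the
# simulated duration are assumed values for illustration.

from collections import deque

def simulate(write_hz: float, read_hz: float, depth: int, seconds: float) -> str:
    """Step a FIFO at two independent clock rates and report its fate."""
    fifo = deque()
    t_w = t_r = 0.0
    t, dt = 0.0, 1.0 / max(write_hz, read_hz) / 4  # oversampled time step
    while t < seconds:
        if t >= t_w:                 # writer deposits one word per write clock
            if len(fifo) >= depth:
                return "overfilled (Z < X): reader too slow, data discarded"
            fifo.append(t)
            t_w += 1.0 / write_hz
        if t >= t_r:                 # reader drains one word per read clock
            if not fifo:
                return "underfilled (Z > X): reader requested data not yet written"
            fifo.popleft()
            t_r += 1.0 / read_hz
        t += dt
    return "stable (Z == X): occupancy bounded"

print(simulate(write_hz=1000.0, read_hz=1001.0, depth=64, seconds=5.0))
```

With the reader only 0.1% faster than the writer, the failure takes many clock cycles to appear, which mirrors the slow, scrolling image error described above.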
Another way to avoid memory storage problems at the receiver is to send a reference clock. However, an additional clock signal can introduce additional noise, consumes an additional data channel, and means the fixed rate system is no longer truly fixed rate. To avoid the memory storage problems without these drawbacks, the pixel clock can instead be regenerated at the receiver, so that no additional clock lines are needed. In this example, the transmitter does not need to send a separate synchronous clock to the receiver to achieve pixel clock alignment. A general protocol with a counter and a clock synthesizer is used in a feedback loop to determine the correction to the pixel frequency, so that the memory is neither overfilled nor underfilled and an error free image is produced.

As shown in FIG. 17, the data transfer system has a transmitter circuit on one end and a receiver circuit on the other end. In this graphics system, the data entering the transmitter can be at various rates based on user requirements and display performance. The graphics resolution used determines the pixel clock frequency of the display system. The transmitter uses the storage unit 1400 to convert the unknown rate input to a fixed rate. The fixed rate data is transmitted along some type of medium or channel to the receiver at the other end. The receiver receives the fixed rate data and stores the data in storage unit 1450. Data must be read out of storage unit 1450 at the same (unknown) rate at which data is written into storage unit 1400 at the transmitter end of the link.

However, the actual pixel clock from the transmitter is not sent with the fixed rate data, and the pixel clock must be regenerated at the receiver. The regenerated pixel clock must match the pixel clock of the transmitter, or over time the storage unit 1450 may overfill or underfill.

When the system is powered up, the transmitter transmits an estimate of the pixel clock frequency. This is done by counting the number of clock transitions in a given time. As shown in FIG. 16, the counter 1300 counts the number of pixel clock transitions between horizontal sync signals. The reference clock is counted during the same period: as shown in FIG. 16, the counter 1350 counts the number of reference clock transitions between horizontal sync signals. The pixel clock and the reference clock are not synchronized, nor are they integer multiples of one another (they are not derived from the same clock source).

Unsynchronized clocks cause quantization errors in the measurement. This results from the uncertainty in the relationship between the two clocks: the rising edge of the sample clock may fall before, after, or at the same time as an edge of the measured clock. Whenever the two edges are offset, the true count is not a whole number but includes a fractional part, and since the reported result is an integer, a measurement error is produced. The receiver uses the same known reference frequency for its measurement and for clock regeneration. By using the reference clock and the count value sent by the transmitter, a close approximation of the pixel clock frequency can be obtained.

A digital clock synthesizer is used to regenerate the receiver pixel clock frequency based on the ratio information transmitted in the protocol. However, due to the error in the count values and the rounding error in the ratio calculation, the resulting pixel clock frequency at the receiver is not exactly the same as the pixel clock of the transmitter.
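The count-based estimate just described amounts to scaling the known reference frequency by the ratio of the two counts. A minimal sketch, with illustrative names and counter widths that are assumptions rather than details from the disclosure:

```cpp
#include <cstdint>

// Hedged sketch of the power-up estimate: counters 1300 and 1350 count pixel
// clock and reference clock transitions over the same hsync-to-hsync
// interval, so the frequency ratio approximates the count ratio. The +/-1
// quantization error discussed above is not modeled here.
double estimate_pixel_clock_hz(std::uint32_t pixel_count,      // counter 1300
                               std::uint32_t reference_count,  // counter 1350
                               double reference_clock_hz) {
    return reference_clock_hz * static_cast<double>(pixel_count)
                              / static_cast<double>(reference_count);
}
```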
This residual error will cause overflow and underflow conditions in the memory unit 1500 of the receiver of FIG. 18. To determine a more accurate pixel clock frequency and avoid overflow and underflow conditions, control circuitry is used to monitor the receiver's memory usage. The control system provides information to the digital clock synthesizer to change the resulting pixel clock frequency. A reference point that repeats at regular intervals in time is selected, and this reference point is used as a guide to the pattern of memory usage.

In this example, the horizontal frequency is used as the reference point. The amount of memory in use is recorded at each rising edge of the horizontal line signal. The measurement is performed again on the next horizontal line signal edge, and the second measurement is compared to the first.

If memory usage has increased, the regenerated pixel clock is too slow and the digital synthesizer needs to increase its frequency. If memory usage has decreased, the digital clock synthesizer needs to decrease its frequency. This measurement feedback loop operates continuously, mainly because the digital synthesizer can never regenerate exactly the same frequency as the transmitter. Over time, the receiver's pixel clock alternates between two frequencies just above and just below the actual transmitted clock frequency; the average of the two values equals the frequency of the transmitter's pixel clock.

In the display example, the comparison is performed line by line. If the amount of memory used in the storage unit has changed since the previous measurement, as shown in FIGS. 19 and 20 (FIG. 20 showing an increase in usage relative to the previous measurement shown in FIG. 19), the measurement information from the previous horizontal line signal is passed to the digital synthesizer to increase or decrease the synthesized frequency to more accurately match the transmitter's pixel clock. The system assumes that the digital synthesizer's pixel clock will never exactly match the transmitter's pixel clock. To overcome this, the pixel clock operates between two frequencies, one slightly below the ideal frequency and the other slightly above it. Over time, the average is the same frequency as the transmitter's pixel clock.

When the system is powered up, the two operating frequencies will differ by a relatively large amount. As the system operates, the difference between the two frequencies decreases. This continues until the difference falls below any error that could accumulate by operating at one frequency for an extended length of time. For system stability against all environmental changes, additional monitoring can be used to re-adjust the two frequencies if needed.
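As a rough illustration of this feedback loop, the following sketch (a hypothetical interface, not taken from the disclosure) samples memory usage at each horizontal sync edge and returns the direction in which the digital clock synthesizer should be nudged; repeated small corrections produce the dithering between two frequencies described above:

```cpp
#include <cstddef>

// Illustrative line-by-line feedback: memory usage is sampled at each
// horizontal sync edge, and the synthesizer is nudged so the regenerated
// clock dithers around the transmitter's true pixel clock frequency.
class PixelClockTrimmer {
    std::size_t previous_usage_ = 0;
public:
    // Returns +1 when the FIFO is filling (regenerated clock too slow),
    // -1 when it is draining (too fast), 0 when usage is unchanged.
    int on_hsync(std::size_t current_usage) {
        int adjust = 0;
        if (current_usage > previous_usage_)      adjust = +1;
        else if (current_usage < previous_usage_) adjust = -1;
        previous_usage_ = current_usage;
        return adjust;
    }
};
```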
As noted above, the digital image interface and/or high resolution multimedia interface is a graphics protocol that sends graphics data and control data from source 1800 of FIG. 21 to display 1850. Control information is also sent from the display back to the source 1800. The graphics information is at a high data rate and the control information is at a low data rate. Since control information flows in both directions, the system requires some type of bidirectional link. Because the return control data is intermittent, its data rate is slow relative to that of the graphics data flowing from the source to the display. The active time of the return control data is only a very small percentage of the time occupied by the downstream graphics data.

One method of establishing such a bidirectional link is illustrated in FIGS. 22 and 23. This approach takes advantage of how the digital image interface and/or high resolution multimedia interface protocol is defined. Since the digital image interface and/or high resolution multimedia interface is a graphical interface, it has data periods as well as non-data periods. A non-data period occurs at the end of each data line before the next horizontal sync signal. In addition, after all lines are sent to the display there is also a non-data time before the next vertical sync signal.

The method provides bidirectional data by time multiplexing between the source and the display, without the need for two separate channels. When display data or control data is not being transmitted, the source-to-display transmission stops, and information from the display to the source can then be transmitted using the switching structure shown in FIG. 23.

As shown in FIG. 23, source 2000 prepares graphics data and timing information to be transmitted to display 2400. When source 2000 transmits graphics data and timing information to display 2400 over communication channel 2200, switching circuit 2100 is configured to cause data to flow from source 2000 to display 2400. In addition, switching circuit 2300 is configured to cause data to flow from source 2000 to display 2400 when source 2000 transmits graphics data and timing information to display 2400 through communication channel 2200.

On the other hand, when source 2000 is not transmitting graphics data and timing information to display 2400 through communication channel 2200, switching circuit 2100 is configured to cause data to flow from display 2400 to source 2000. In addition, switching circuit 2300 is configured to cause data to flow from display 2400 to source 2000 when source 2000 is not transmitting graphics data and timing information to the display via communication channel 2200.

It is noted that the various embodiments described above can be used in a remote workstation/central processing environment, as shown in FIG. 24. In this environment, as shown in FIG. 24, the central computing device or room 3000 includes all of the primary processing capability for each user, in the form of "blade PCs." A blade PC is the primary processing center for a system user, with each user assigned and connected to a separate blade PC. In other words, the blade PC is the equivalent of the user's personal computer in the distributed system.

The remote workstation/central processing environment allows the primary processing devices to be housed in a temperature controlled environment. In addition, the remote workstation/central processing environment eliminates individual PC enclosures, allows for shared power, and reduces machine noise in the user's environment.

As shown in FIG. 24, each user has, at his workstation or desktop (3300, 3400 or 3500), a monitor (3340, 3440 or 3540); an input device (3320, 3420 or 3520) such as a keyboard, a pointing device (mouse, digitizing tablet and/or light pen) and/or a microphone; and/or an input and/or output device (3330, 3430 or 3530) such as a storage device (CD R/W drive, DVD R/W drive, floppy disk drive and/or removable storage device), a speaker, a docking station and/or a digital imager.
Each station also includes an interface (3310, 3410 or 3510) that provides a bridge between the workstation devices and the associated optical communication link (3200, 3210 or 3220). The various communication links are coupled to interface 3100 at central computing device 3000 such that each blade PC has an optical link to its associated workstation. The optical communication link (3200, 3210 or 3220) carries not only graphics data from the blade PC to the associated workstation, but also all data between the blade PC and the various related workstation devices, such as data generated by a keyboard or mouse. This data communication can be bidirectional.

To facilitate proper communication between the central computing device 3000 and each workstation (3300, 3400 or 3500), the interface (3310, 3410 or 3510) includes the various components described above that facilitate optical-to-electrical and electrical-to-optical conversion. More specifically, in one possible embodiment of the invention, interface 3100 measures various timing signals to generate timing information, wherein the timing signals are fed into a programmable gate array.

The programmable gate array generates header information from the timing information and encodes a plurality of graphics data channels (e.g., red, green, and blue data channels). The programmable gate array also transmits the header information with the graphics data and appropriate idle codes to the digital-to-optical converter as needed. The digital-to-optical converter converts the data into a stream of optical pulses. The light pulses are fed to the optical transceiver for transmission on one of the optical communication links (3200, 3210 or 3220), which carries the data to the appropriate workstation (3300, 3400 or 3500).

At the workstation end, the interface (3310, 3410 or 3510) includes an optical transceiver that receives optical pulses from the optical communication link (3200, 3210 or 3220). The light pulses are converted into electrical signals by an optical-to-digital converter. The fixed data rate electrical data stream is decoded by the programmable gate array to produce graphics data and timing information. The timing information is converted into timing signals. The timing signals and the decoded graphics data are fed to the monitor or display device (3340, 3440 or 3540).

As described above, the system transfers data from point A to point B; however, the amount of data transmitted by the system in one direction is different from the amount transmitted in the other direction. More specifically, in the conventional system, point A can transmit data to point B at a rate of 2 Gb/s, while point B can only transmit data to point A at a rate of 1 Mb/s. Typically, such a system requires two fiber channels, one for high speed downstream data and the other for low speed upstream data, or a single fiber carrying bidirectional data streams at two different wavelengths, which adds additional circuitry.

To avoid this problem, as described above, one solution uses optical fiber to pass high rate data from point A to point B, and uses an electrical signal bearing medium to pass data from point B to point A. For example, a fiber optic assembly can include optical fibers for the high data rate signals and tension members made of a low resistance material. The tension members are used to carry electrical signals at lower data rates. There are various ways to establish electrical signals on the tension members.
The tension members can carry DC signals, such as power and ground, and a combination of a DC level and an AC component can also be used to supply power. Low frequency modulation can be embedded in these signals to provide low data rate information. Another example uses current modulation as described above.

It is noted that any data from the display to the source can be held in memory until one of the idle times occurs; the return data can then be sent over the same channel. It is also noted that, at each end of the channel, various other techniques can be developed to handle the transmission and reception of data at each endpoint.

While the various examples and embodiments of the present invention have been shown and described, it is understood that |
The present disclosure relates to acquiring and releasing a shared resource via a lock semaphore and, more particularly, to acquiring and releasing a shared resource via a lock semaphore utilizing a state machine. |
What is claimed is:

1: A method of managing a lock utilized by a thread comprising: selecting an action to perform upon the lock, wherein the action is selected from a group comprising: acquiring the lock, trying to acquire the lock, and releasing the lock; asynchronously querying the current state of a lock, having a multi-value state; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

2: The method of claim 1, further including, if the state transition fails and if the selected action was either acquiring or releasing the lock, repeating, until the state transition succeeds: asynchronously querying the current state of the lock; speculatively determining the next state of the lock; attempting to transition the lock from the queried current state to the speculatively determined next state.

3: The method of claim 2, further including, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state represents the acquisition of the lock, indicating the acquisition of the lock.

4: The method of claim 3, further including, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state does not represent the acquisition of the lock, adding the thread to the end of a queue of threads waiting to acquire the lock; waiting to receive notification that the thread may acquire the lock; and indicating the acquisition of the lock.

5: The method of claim 2, further including, if the state transition succeeds, and the selected action is releasing the lock, determining the number of threads in a queue to acquire the lock utilizing the speculatively determined next state of the lock.

6: The method of claim 5, further including, if the queue includes at least a first thread, removing the first thread from the queue; and notifying the first thread that the first thread has acquired the lock.

7: The method of claim 1, further including, if the selected action is trying to acquire the lock and the state transition fails, indicating that the lock was unable to be acquired.

8: The method of claim 1, further including, if the state transition succeeds and the selected action is trying to acquire the lock, indicating the acquisition of the lock.

9: The method of claim 1, further including, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state represents the acquisition of the lock, indicating the acquisition of the lock.

10: The method of claim 9, further including, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state does not represent the acquisition of the lock, adding the thread to the end of a queue of threads waiting to acquire the lock; waiting to receive notification that the thread may acquire the lock; and indicating the acquisition of the lock.

11: The method of claim 1, further including, if the state transition succeeds, and the selected action is releasing the lock, determining the number of threads in a queue to acquire the lock utilizing the speculatively determined next state of the lock.
12: The method of claim 11, further including, if the queue includes at least a first thread, removing the first thread from the queue; and notifying the first thread that the first thread has acquired the lock.

13: The method of claim 1, wherein the thread includes: a unique thread identifier; a next thread field to facilitate access to the next thread in a queue of threads waiting to acquire the lock; and the thread is only capable of waiting for a single lock at a time.

14: The method of claim 1, wherein the action of acquiring the lock includes the inability to timeout or fail to acquire the lock.

15: The method of claim 1, wherein the lock's current state may change between asynchronously querying the current state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

16: An apparatus comprising: a lock, having a multi-state value, including: a flag value, a first thread value, and a last thread value; and a lock acquirer, which is capable of performing an acquisition of the lock via asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

17: The apparatus of claim 16, wherein the lock acquirer is further capable of performing two general actions, including acquiring the lock, trying to acquire the lock; and wherein, if the state transition fails and the general action is acquiring the lock, the lock acquirer is further capable of repeating, until the state transition succeeds: asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

18: The apparatus of claim 17, wherein, if the state transition fails and the general action is trying to acquire the lock, the lock acquirer is further capable of indicating that the lock was unable to be acquired.

19: The apparatus of claim 18, wherein, if the state transition succeeds and the general action is trying to acquire the lock, the lock acquirer is further capable of indicating that the lock was acquired.

20: The apparatus of claim 16, wherein, if the state transition succeeds, the general action is acquire the lock, and the speculatively determined next state represents the acquisition of the lock, the lock acquirer is further capable of indicating that the lock was acquired.

21: The apparatus of claim 20, wherein, if the state transition fails, the general action is acquire the lock, and the speculatively determined next state does not represent the acquisition of the lock, the lock acquirer is further capable of: adding the thread to the end of a queue of threads waiting to acquire the lock; waiting to receive notification that the thread may acquire the lock; and indicating the acquisition of the lock.

22: The apparatus of claim 21, wherein the lock acquirer is unable to timeout or fail if the selected general action is acquiring the lock.
23: An apparatus comprising: a lock, having a multi-state value, including: a flag value, a first thread value, and a last thread value; and a lock releaser, which is capable of releasing a hold on the lock via asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

24: The apparatus of claim 23, wherein, if the state transition fails, the lock releaser is further capable of repeating, until the state transition succeeds: asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

25: The apparatus of claim 23, wherein, if the state transition succeeds, the lock releaser is further capable of determining the number of threads in a queue of threads waiting to acquire the lock utilizing the speculatively determined next state of the lock.

26: The apparatus of claim 25, wherein, if the queue includes at least a first thread, the lock releaser is further capable of: removing the first thread from the queue; and notifying the first thread that the first thread has acquired the lock.

27: The apparatus of claim 26, wherein the lock releaser is capable of removing the first thread from the queue utilizing a thread having: a unique thread identifier; and a next thread value to facilitate access to the next thread in the queue.

28: The apparatus of claim 23, wherein the lock is capable of changing state in between the time the lock releaser asynchronously queries the current state of the lock; and attempts to transition the lock from the queried current state to the speculatively determined next state.

29: An article comprising: a storage medium having a plurality of machine accessible instructions, wherein when the instructions are executed, the instructions provide for: selecting an action to perform upon a lock utilized by a thread, wherein the action is selected from a group comprising: acquiring the lock, trying to acquire the lock, and releasing the lock; asynchronously querying the current state of a lock, having a multi-value state; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

30: The article of claim 29, further including instructions providing for, if the state transition fails and if the selected action was either acquiring or releasing the lock, repeating, until the state transition succeeds: asynchronously querying the current state of the lock; speculatively determining the next state of the lock; attempting to transition the lock from the queried current state to the speculatively determined next state.

31: The article of claim 30, further including instructions providing for, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state represents the acquisition of the lock, indicating the acquisition of the lock.
32: The article of claim 31, further including instructions providing for, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state does not represent the acquisition of the lock, adding the thread to the end of a queue of threads waiting to acquire the lock; waiting to receive notification that the thread may acquire the lock; and indicating the acquisition of the lock.

33: The article of claim 30, further including instructions providing for, if the state transition succeeds, and the selected action is releasing the lock, determining the number of threads in a queue to acquire the lock utilizing the speculatively determined next state of the lock.

34: The article of claim 33, further including instructions providing for, if the queue includes at least a first thread, removing the first thread from the queue; and notifying the first thread that the first thread has acquired the lock.

35: The article of claim 29, further including instructions providing for, if the selected action is trying to acquire the lock and the state transition fails, indicating that the lock was unable to be acquired.

36: The article of claim 29, further including instructions providing for, if the state transition succeeds and the selected action is trying to acquire the lock, indicating the acquisition of the lock.

37: The article of claim 29, further including instructions providing for, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state represents the acquisition of the lock, indicating the acquisition of the lock.

38: The article of claim 37, further including instructions providing for, if the state transition succeeds, the selected action is acquiring the lock, and the speculatively determined next state does not represent the acquisition of the lock, adding the thread to the end of a queue of threads waiting to acquire the lock; waiting to receive notification that the thread may acquire the lock; and indicating the acquisition of the lock.

39: The article of claim 29, further including instructions providing for, if the state transition succeeds, and the selected action is releasing the lock, determining the number of threads in a queue to acquire the lock utilizing the speculatively determined next state of the lock.

40: The article of claim 39, further including instructions providing for, if the queue includes at least a first thread, removing the first thread from the queue; and notifying the first thread that the first thread has acquired the lock.

41: The article of claim 29, wherein the thread includes: a unique thread identifier; a next thread field to facilitate access to the next thread in a queue of threads waiting to acquire the lock; and the thread is only capable of waiting for a single lock at a time.

42: The article of claim 29, wherein the action of acquiring the lock includes the inability to timeout or fail to acquire the lock.

43: The article of claim 29, wherein the lock's current state may change between asynchronously querying the current state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.
44: A system comprising: a memory element, capable of storing a queue of threads, each thread including a unique thread identifier, and a next thread value to facilitate access to the next thread in the queue; a lock, having a multi-state value, including: a flag value, a first thread value, and a last thread value; and a lock acquirer, which is capable of performing an acquisition of the lock via asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

45: The system of claim 44, wherein the lock acquirer is further capable of performing two general actions, including acquiring the lock, trying to acquire the lock; and wherein, if the state transition fails and the general action is acquiring the lock, the lock acquirer is further capable of repeating, until the state transition succeeds: asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

46: The system of claim 45, wherein, if the state transition fails and the general action is trying to acquire the lock, the lock acquirer is further capable of indicating that the lock was unable to be acquired.

47: The system of claim 46, wherein, if the state transition succeeds and the general action is trying to acquire the lock, the lock acquirer is further capable of indicating that the lock was acquired.

48: The system of claim 44, wherein, if the state transition succeeds, the general action is acquire the lock, and the speculatively determined next state represents the acquisition of the lock, the lock acquirer is further capable of indicating that the lock was acquired.

49: The system of claim 48, wherein, if the state transition fails, the general action is acquire the lock, and the speculatively determined next state does not represent the acquisition of the lock, the lock acquirer is further capable of: adding the thread to the end of the queue of threads waiting to acquire the lock; waiting to receive notification that the thread may acquire the lock; and indicating the acquisition of the lock.

50: The system of claim 49, wherein the lock acquirer is unable to timeout or fail if the selected general action is acquiring the lock.

51: A system comprising: a memory element, capable of storing a queue of threads, each thread including a unique thread identifier, and a next thread value to facilitate access to the next thread in the queue; a lock, having a multi-state value, including: a flag value, a first thread value, and a last thread value; and a lock releaser, which is capable of releasing a hold on the lock via asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.

52: The system of claim 51, wherein, if the state transition fails, the lock releaser is further capable of repeating, until the state transition succeeds: asynchronously querying the current state of the lock; speculatively determining the next state of the lock; and attempting to transition the lock from the queried current state to the speculatively determined next state.
53: The system of claim 51, wherein, if the state transition succeeds, the lock releaser is further capable of determining the number of threads in the queue of threads waiting to acquire the lock utilizing the speculatively determined next state of the lock.

54: The system of claim 53, wherein, if the queue includes at least a first thread, the lock releaser is further capable of: removing the first thread from the queue; and notifying the first thread that the first thread has acquired the lock.

55: The system of claim 51, wherein the lock is capable of changing state in between the time the lock releaser asynchronously queries the current state of the lock; and attempts to transition the lock from the queried current state to the speculatively determined next state. |
LOW-CONTENTION LOCK

BACKGROUND

1. Field

The present disclosure relates to acquiring and releasing a shared resource via a lock semaphore and, more particularly, to acquiring and releasing a shared resource via a lock semaphore utilizing a state machine.

2. Background Information

Typically, processing or computer systems allow multiple programs to execute substantially simultaneously. Multiple programs may execute substantially simultaneously utilizing techniques such as, for example, time slicing, parallel execution or multiple processing engines. Furthermore, it is possible for multiple parts of a program, or threads, to execute substantially simultaneously in much the same manner. Techniques that allow for this substantially simultaneous execution are often referred to as multi-tasking, multi-threading or hyper-threading. An example of a multi-tasking technique may allow a music player and a word processor to run substantially simultaneously, so a user could listen to music while writing a document. An example of a multi-threading technique may be a word processor that allows editing of a document while simultaneously printing the same document.

These threads, processes, or programs, hereafter collectively referred to as "threads," often access shared resources. These shared resources may include physical hardware or other sections of executable instructions, such as, for example, a common library. These shared resources may not be capable of being substantially simultaneously utilized by multiple threads. For example, it is not common for a printer to print two or more documents simultaneously; however, in a multi-threaded environment two or more threads may attempt to simultaneously print to the printer. Of course, this is merely one example of a shared resource that may be incapable of being substantially simultaneously utilized by multiple threads.

To prevent errors or other undesirable effects that may occur when multiple threads attempt to simultaneously use a shared resource, a variety of techniques are known. In one technique, thread access to a shared resource may be governed by a semaphore lock, hereafter a "lock." In this context, a lock is a signal or a flag variable used to govern access to shared system resources. A lock often indicates to other potential users or threads that a file or other resource is in use and prevents access by more than one user or thread.

In the printer example above, a first thread may acquire a lock on the printer, print the document, and release the lock on the printer. The second thread may attempt to acquire the printer's lock. Upon finding the printer already locked by the first thread, the second thread typically waits to acquire the lock. When the first thread releases the printer lock, the second thread may then acquire the printer lock, print the second document, and release the lock on the printer. In this example, contention for access to the printer is governed.

Often it is possible for a single thread to hold multiple locks at a given time. Using traditional techniques, when a thread holds multiple locks at the same time, the associated dynamic memory allocation and deallocation is often proportional to the sum of the number of locks, the number of threads, and the number of lock acquisitions. In modern systems, this resulting number is often quite large.
In addition, frequent memory allocations and deallocations may consume a large amount of processing time and other system resources. A need, therefore, exists for an improved system or technique for implementing the acquiring and releasing of a shared resource via a lock semaphore.

BRIEF DESCRIPTION OF THE DRAWINGS

Subject matter is particularly pointed out and distinctly claimed in the concluding portions of the specification. The disclosed subject matter, however, both as to organization and method of operation, together with objects, features and advantages thereof, may be best understood by reference to the following detailed description when read with the accompanying drawings in which:

FIG. 1 is a flowchart illustrating an embodiment of a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter;

FIG. 2 is a flowchart illustrating an embodiment of a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter;

FIG. 3 is a state diagram illustrating an embodiment of a state machine utilized by a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter;

FIG. 4 is a table detailing the possible states of a state machine utilized within an embodiment of a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter; and

FIG. 5 is a block diagram illustrating an embodiment of an apparatus and a system that allows for acquisition and release of a lock in accordance with the disclosed subject matter.

DETAILED DESCRIPTION

In the following detailed description, numerous details are set forth in order to provide a thorough understanding of the present disclosed subject matter. However, it will be understood by those skilled in the art that the disclosed subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the disclosed subject matter.

FIG. 1 is a flowchart illustrating an embodiment of a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter. Block 110 illustrates that a requesting thread, or an agent associated with the thread, may select an action to perform upon the lock. In one embodiment, the action may be selected from a group of actions including: acquiring the lock, trying to acquire the lock, or releasing the lock. Block 120 illustrates that the current state of the lock may be asynchronously queried. In one embodiment, the lock may utilize a state machine with four valid states, such as, for example, the state machine shown in FIG. 3.

FIG. 3 is a state diagram illustrating an embodiment of a state machine utilized by a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter. One embodiment of the state machine may include four valid states. The embodiment may also involve a lock that includes a flag value, a pointer to the first thread in a queue of threads waiting to acquire the lock, and a pointer to the last thread in that queue. In one embodiment, the flag value may indicate both whether or not the lock is being held and whether there is a queue of threads waiting to acquire the lock. In other embodiments, the flag value may only indicate whether or not the lock is being held, and another value may indicate whether a queue exists. It is also contemplated that the existence of the queue may be determined from the first and last thread pointers. It is further contemplated that the "pointer" to the threads may be an address value, a unique thread identifier, or some other value that facilitates access to the threads.
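As a rough illustration of such a lock, the following C++ sketch (reused by the sketches that follow) packs a flag value and two thread values into a single word. All names and widths are assumptions made for illustration, since the disclosure permits addresses or other identifiers:

```cpp
#include <cstdint>

// Minimal sketch. Thread "pointers" are modeled as 16-bit thread
// identifiers (0 meaning "none"); the disclosure equally permits addresses
// or other values.
struct LockState {
    std::uint8_t  flag;   // 0 = free, 1 = held/no queue, 2 = held/queued
    std::uint16_t first;  // head of the wait queue ("H"), or 0
    std::uint16_t last;   // tail of the wait queue ("T"), or 0
};

// Packing the whole state into one 64-bit word keeps it small enough for
// the single compare-and-store operation discussed later.
inline std::uint64_t pack(LockState s) {
    return (std::uint64_t{s.flag} << 32) |
           (std::uint64_t{s.first} << 16) |
            std::uint64_t{s.last};
}

inline LockState unpack(std::uint64_t w) {
    return { std::uint8_t(w >> 32), std::uint16_t(w >> 16),
             std::uint16_t(w) };
}
```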
In the embodiments illustrated by FIGS. 3 and 4, the state of the lock may be determined by the flag value and the thread pointers; however, other techniques for determining the lock state are contemplated.

State 310 may indicate that no thread holds the lock and there are no threads waiting in a queue to acquire the lock. In the illustrated embodiment, the flag value, the first thread pointer, and the last thread pointer may all be set to zero. However, it is contemplated that other values may be used in other embodiments to represent this unacquired state. In this embodiment, state 310 may be the initial state of the lock. Also in this embodiment, the lock, since it is not held, may not be released; an attempt to release the lock may result in an error. Furthermore, in this embodiment, the lock may only be acquired (via either an acquire or a try action). The act of acquiring the lock may move the lock to state 320. However, other embodiments are contemplated.

State 320 may indicate that a thread holds the lock and no threads are waiting in the queue. In the illustrated embodiment, the flag value may be set to one, and the first and last thread pointers may be set to zero. However, it is contemplated that other values may be used in other embodiments to represent this acquired state. In this embodiment, if the lock is released, the lock may return to state 310. Also, in this embodiment, if the lock is acquired, the lock may move to state 330. However, other embodiments are contemplated.

State 330 may indicate that a thread holds the lock and one thread is waiting in the queue. In the illustrated embodiment, the flag value may be set to two, and the first and last thread pointers may point to the same thread. This thread is represented in FIGS. 3 and 4 as "H," which stands for the thread at the head of the queue. However, it is contemplated that other values may be used in other embodiments to represent this state. In this embodiment, if the lock is released, the lock may return to state 320. Also, in this embodiment, if the lock is acquired, the lock may move to state 340. However, other embodiments are contemplated.

State 340 may indicate that a thread holds the lock and that more than one thread is waiting in the queue. In the illustrated embodiment, the flag value may be set to two, and the first and last thread pointers may point to different threads. The last thread is represented in FIGS. 3 and 4 as "T," which stands for the thread at the tail of the queue. However, it is contemplated that other values may be used in other embodiments to represent this state. In this embodiment, if the lock is released, the lock may return to state 330 or remain at state 340, depending upon whether releasing the lock changes the queue length to one. Also, in this embodiment, if the lock is acquired, the lock may remain at state 340. It is contemplated that when the lock remains at state 340 after a release or acquire action, either the first or the last thread pointer may be changed to represent the performed action. However, other embodiments are contemplated.
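Continuing the sketch above, the four states of FIG. 3 can be recovered from the flag value and the two pointers; the numeric encodings follow the illustrated embodiment, while the names are assumed:

```cpp
// Recovering the four states of FIG. 3 from the flag value and thread
// pointers of the illustrated embodiment (LockState as defined earlier).
enum class State { Free310, HeldEmpty320, HeldOne330, HeldMany340 };

inline State classify(const LockState& s) {
    if (s.flag == 0) return State::Free310;          // nobody holds the lock
    if (s.flag == 1) return State::HeldEmpty320;     // held, no waiters
    return (s.first == s.last) ? State::HeldOne330   // held, one waiter (H == T)
                               : State::HeldMany340; // held, several waiters
}
```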
FIG. 4 is a table detailing the possible states of a state machine utilized within an embodiment of a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter. It provides a summary of the state machine illustrated in FIG. 3. Row 410 summarizes state 310, row 420 summarizes state 320, row 430 summarizes state 330, and row 440 summarizes state 340. However, FIGS. 3 and 4 merely illustrate one embodiment of the disclosed subject matter, and other embodiments are contemplated.

Returning to FIG. 1, block 130 illustrates that once the current state of the lock has been determined in block 120, the lock's next state may be speculatively determined. In one illustrative example, the thread may request that the lock be acquired. The current state of the lock may show that the lock is not presently held. Therefore, utilizing the embodiment of the state machine illustrated by FIG. 3, the next state of the lock, after the requested "acquire" action is performed, may be speculatively determined to be a state that represents the lock being held but having no threads in a queue waiting to acquire the lock.

Block 140 illustrates that an attempt to transition the lock to the next state may be made. It is contemplated that this, and possibly any, alteration of the lock's state may be done via a technique that attempts to minimize the occurrence of race conditions and other undesirable thread related effects. A race condition is often defined as an undesirable situation that occurs when a device or system attempts to perform two or more operations at substantially the same time, but, because of the nature of the device or system, the operations must be done in the proper sequence in order to be done correctly. In one embodiment, the alteration may be performed by a "compare-and-store" operation (a.k.a. a "test-and-set" operation) that confirms that a variable is equal to an expected value before allowing the variable to be set to a new value. However, these are merely a few non-limiting examples to which the disclosed subject matter is not limited.

Block 150 illustrates that the state transition may not, in some embodiments, be successful. It is contemplated that, in one embodiment, the state of the lock may change between block 120 and block 140, and that the thread or performer of the technique may not be aware of this change in state before the transition is attempted. In one illustrative example, a second thread may alter the state of the lock between blocks 120 and 140, which may cause the attempted state transition to fail. It is contemplated that the transition may fail if such a change has occurred; however, other possible failures are contemplated.

Block 160 illustrates that, if the state transition of block 140 failed, the selected action may be examined. The selected or requested action of block 110 may be, in one embodiment: try to acquire the lock, acquire the lock, or release the lock. However, other actions are within the scope of the disclosed subject matter. Block 170 illustrates that, in one embodiment, if the selected action was to merely try to acquire the lock and the state transition failed, the technique may indicate to the requesting thread that the lock was not acquired. Conversely, in one embodiment, if the selected action was "acquire" or "release," FIG. 1 illustrates that the technique may repeat blocks 120, 130, and 140 until the lock has successfully transitioned state.
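The query/speculate/attempt loop of blocks 120 through 140 maps naturally onto an atomic compare-and-exchange, which can play the role of the "compare-and-store" operation named above. A hedged sketch, reusing the packed lock word from the earlier sketches (for the "try" action, the loop would instead return failure on the first miss):

```cpp
#include <atomic>
#include <cstdint>

// Sketch of blocks 120-140. The speculation policy (block 130) is passed
// in, since it differs per selected action.
template <typename Speculate>
std::uint64_t transition(std::atomic<std::uint64_t>& lock_word,
                         Speculate speculate) {
    // Block 120: asynchronously query the current state.
    std::uint64_t current = lock_word.load(std::memory_order_acquire);
    for (;;) {
        // Block 130: speculatively determine the next state.
        std::uint64_t next = speculate(current);
        // Block 140: attempt the transition. The compare-and-exchange fails
        // (and refreshes `current`) if another thread changed the state in
        // the meantime, so blocks 120-140 are effectively repeated.
        if (lock_word.compare_exchange_weak(current, next,
                                            std::memory_order_acq_rel))
            return next;
    }
}
```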
FIG. 2 is a flowchart illustrating an embodiment of a technique for acquiring and/or releasing a lock in accordance with the disclosed subject matter. FIG. 2 is an extension of FIG. 1 that details an embodiment of a technique that may be employed if the state transition of block 140 of FIG. 1 is successful. Block 210 of FIG. 2 illustrates that different events may transpire depending upon the action selected (see block 110 of FIG. 1). If the selected action was "acquire" or "try," block 220 illustrates that different actions may be performed based upon whether or not the lock was acquired. In one illustrative embodiment, acquiring the lock may be synonymous with transitioning the lock into state 320 of FIG. 3. However, this is merely one illustrative embodiment, and other embodiments are contemplated.

Block 230 illustrates, if the lock was acquired, that an indication that the lock was acquired may be made to the thread. In one embodiment, this indication may include deselecting (or setting to the "false" state) a spin flag in the thread. This spin flag may have prevented execution of the thread while it waited on the acquisition of the lock. However, it is contemplated that other forms of indication are possible and that this is merely one illustrative example. It is also contemplated that the indication may only be made in certain embodiments of the disclosed subject matter.

Block 250 illustrates that, if the lock was not acquired and the selected action was "acquire," the thread requesting the lock may be added to a queue of threads waiting to acquire the lock. In one embodiment, the thread may simply be added to the end or tail of the queue. However, it is contemplated that other schemes may be used to prioritize access to the lock.

In one illustrative embodiment, the added thread may be the first and only thread in the queue. For example, the lock may be transitioned from state 320 of FIG. 3 to state 330. In this case, adding the thread to the queue may include setting the flag value of the lock to two, and placing a pointer (or some other value to facilitate access) to the thread in the first and last thread pointer values of the lock. However, this is merely one highly specific embodiment of the disclosed subject matter, and other embodiments are contemplated.

In a second illustrative embodiment, the added thread may be the second thread in the queue. For example, the lock may be transitioned from state 330 to state 340. In this case, adding the thread to the queue may include leaving the flag value and the first thread pointer of the lock unchanged, and placing a pointer (or some other value to facilitate access) to the thread in the last thread pointer value of the lock. However, this is merely one highly specific embodiment of the disclosed subject matter, and other embodiments are contemplated.

In a third illustrative embodiment, the added thread may be the third or later thread in the queue. For example, the lock may be transitioned from a previous state 340 to a new state 340. In this case, adding the thread to the queue may include leaving the flag value and the first thread pointer of the lock unchanged, but placing a pointer (or some other value to facilitate access) to the thread in the last thread pointer value of the lock; this new last (or "tail") thread pointer replaces the previous last thread pointer. However, this is merely one highly specific embodiment of the disclosed subject matter, and other embodiments are contemplated.
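All three enqueue cases above collapse into one speculation rule. A sketch of block 250, reusing the LockState type from the earlier sketches; the follow-up update to the previous tail thread's own "next thread" field is noted but omitted:

```cpp
// Speculation for block 250 (enqueue at the tail). After the transition
// succeeds, the previous tail thread's "next thread" field would
// additionally be pointed at the newcomer; that record update is omitted.
inline LockState speculate_enqueue(LockState s, std::uint16_t me) {
    s.flag = 2;                      // held, with at least one waiter
    if (s.first == 0) s.first = me;  // queue was empty: newcomer is head too
    s.last = me;                     // newcomer is always the new tail
    return s;
}
```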
Block 255 of FIG. 2 illustrates that the now queued thread may wait to receive notification that the lock is acquired. It is contemplated that, in one embodiment, the thread may await notification that the lock is available to be acquired. In one embodiment, the thread may be prevented from executing while waiting. In another embodiment, the thread may continue to execute a portion of the thread that does not need or desire access to the resource controlled by the lock.

Block 260 illustrates that, if the selected action was to release the lock, the number of threads in, or, in another embodiment, the existence of, a queue of threads waiting to acquire the lock may be determined. In one embodiment, the approximate size of the queue may be determined by a flag value associated with the lock. In another embodiment, the existence or depth of the queue may be determined by comparing the first and last queued thread pointers. However, these are merely two illustrative examples, and it is contemplated that other schemes for determining the existence or depth of a queue may be used.

Block 270 illustrates that if no queue exists, the lock may be released. In one embodiment, illustrated by FIG. 3, this may involve transitioning the lock from state 320 to state 310. In this embodiment, block 270 of FIG. 2 may be synonymous with block 140 of FIG. 1. However, it is contemplated that other embodiments may include a more involved releasing mechanism, such as, for example, a pre-defined return value or a centralized status mechanism. These are merely a few non-limiting embodiments.

Block 280 of FIG. 2 illustrates that if a queue does exist, the first thread in the queue may be identified or accessed. In one embodiment, this may involve utilizing the pointer value associated with the first thread pointer value of the lock. However, this is merely one illustrative embodiment, and other embodiments are contemplated.

Block 283 illustrates that the first thread may be removed from the queue. In one embodiment this may include editing both the state of the lock and the de-queued thread. However, other schemes for de-queuing the thread are contemplated. Three highly specific embodiments are described below; however, these are merely a few non-limiting examples.

In one illustrative embodiment, the de-queued thread may be the first and only thread in the queue. For example, the lock may be transitioned from state 330 of FIG. 3 to state 320. In this case, removing the thread from the queue may include setting the flag value of the lock to one, and setting the first and last thread pointer values to zero. However, this is merely one highly specific embodiment of the disclosed subject matter, and other embodiments are contemplated.

In a second illustrative embodiment, the queue may include only two threads. For example, the lock may be transitioned from state 340 to state 330. In this case, removing the thread from the queue may include leaving the flag value and the last thread pointer of the lock unchanged, while placing a pointer (or some other value to facilitate access) to the second queued thread in the first thread pointer value of the lock. In one embodiment, the first thread may include a "next thread" value that contains a pointer to the next thread in the queue; this next thread value may be accessed to determine the proper value for the new first thread pointer in the lock. However, this is merely one highly specific embodiment of the disclosed subject matter, and other embodiments are contemplated.
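The release-side speculation of blocks 260 through 283 can be sketched the same way; here `next_of_first` stands in for reading the head thread's "next thread" value and is an assumed helper, not part of the disclosure:

```cpp
// Speculation for blocks 260-283 (LockState as defined earlier).
inline LockState speculate_release(LockState s, std::uint16_t next_of_first) {
    if (s.first == 0) {           // no queue: state 320 -> 310, lock freed
        s.flag = 0;
    } else if (s.first == s.last) {
        s.flag = 1;               // one waiter: state 330 -> 320; the lock
        s.first = s.last = 0;     // is handed to the de-queued thread
    } else {
        s.first = next_of_first;  // several waiters: 340 -> 330 or 340
    }
    return s;
}
```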
In a third illustrative embodiment, the queue may include more than two threads. For example, the lock may be transitioned from a previous state 340 to a new state 340. In this embodiment the actions may be identical to the second illustrative embodiment; however, unlike the second embodiment, where the first and last thread pointers ultimately contained the same value in state 330, this embodiment results in the first and last thread pointers containing different values in state 340. However, this is merely one highly specific embodiment of the disclosed subject matter, and other embodiments are contemplated.

Block 286 of FIG. 2 illustrates that the de-queued thread may be notified that it has acquired the lock. In one embodiment, this indication may include deselecting (or setting to the "false" state) a spin flag in the thread. This spin flag may have prevented execution of the thread while it waited on the acquisition of the lock. However, it is contemplated that other forms of indication are possible and that this is merely one illustrative example. It is also contemplated that the indication may only be made in certain embodiments of the disclosed subject matter.

It is also contemplated that, in one embodiment, some, if not all, of the actions illustrated in FIG. 2 may be included as part of blocks 140 and 150 of FIG. 1. In this embodiment, blocks 140 and 150 may be implemented as an atomic action, such as, for example, a "compare-and-store" operation (a.k.a. a "test-and-set" operation) that confirms that a variable is equal to an expected value before allowing the variable to be set to a new value.

FIG. 5 is a block diagram illustrating an embodiment of an apparatus 501 and a system 500 that allow for acquisition and release of a lock 510 in accordance with the disclosed subject matter. In one embodiment, the lock 510 may include a state value 520. The state value may include a flag value 523 to indicate whether or not the lock is currently held and/or the approximate length of a queue of threads 550 waiting to acquire the lock, a first thread value 525 to facilitate access to a first thread 560, and/or a last thread value 528 to facilitate access to a last thread 580.

In one embodiment, the system 500 may include a queue of threads 550 that are waiting to acquire the lock 510. While FIG. 5 illustrates a queue having at least four threads, a queue having zero or more threads is within the scope of the disclosed subject matter. The queue may include a first thread 560, a last thread 580, a second thread 570, and a plurality of other threads 590. In one embodiment, for example if the queue includes only one thread, the first and last threads may be identical. In one embodiment, each thread in the queue may include a wait value 593 that indicates that the thread is waiting to acquire the lock, and/or a next thread value 597 that facilitates access to the next thread in the queue. However, it is contemplated that other state and memory structures may be utilized by the threads.

In one embodiment, the apparatus 501 and system 500 may include a lock acquirer 530 to facilitate acquiring the lock 510. In one embodiment, the lock acquirer may be capable of performing all or part of the technique illustrated by FIGS. 1 and 2 and described above. In another example, the lock acquirer may be capable of determining whether the lock is held; if so, the lock acquirer may place the requesting thread within the queue 550. It is contemplated that the requesting thread may be placed at the end of the queue, or in the front or middle of the queue if a prioritized queue scheme is used. However, these are merely a few non-limiting examples of embodiments within the scope of the disclosed subject matter.
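One possible shape for the wait value 593 and next thread value 597, together with the spin-flag wait of block 255 and the notification of block 286; the field names and the busy-wait policy are assumptions for illustration:

```cpp
#include <atomic>

// A sketch of the thread record of FIG. 5 (names assumed). The spin flag is
// set before the thread is enqueued; block 255 then busy-waits on it, and
// block 286 clears it to hand over the lock.
struct ThreadRecord {
    std::atomic<bool>          spin{true};    // wait value 593
    std::atomic<ThreadRecord*> next{nullptr}; // next thread value 597
};

inline void wait_for_lock(ThreadRecord& me) {           // block 255
    while (me.spin.load(std::memory_order_acquire)) {}  // spin until notified
}

inline void notify_acquired(ThreadRecord& head) {       // block 286
    head.spin.store(false, std::memory_order_release);  // deselect spin flag
}
```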
In another example, the lock acquirer may be capable of determining if the lock is held. If so, the lock acquirer may place a requesting thread within the queue 550. It is contemplated that the requesting thread may be placed at the end of the queue, or in the front or middle of the queue if a prioritized queue scheme is used. However, these are merely a few non-limiting examples of embodiments within the scope of the disclosed subject matter. In one embodiment, the apparatus 501 and system 500 may include a lock releaser 540 to facilitate releasing the lock 510. In one embodiment, the lock releaser may be capable of performing all or part of the technique illustrated by FIGs. 1 & 2 and described above. In another example, the lock releaser may be capable of determining if a queue 550 exists or is empty. If the queue exists, the lock releaser may remove the first thread 560 from the queue and move the second thread 570 to the first position in the queue. The lock releaser may then notify the first thread 560 that the lock is available. It is contemplated that, in one embodiment, the lock releaser may use the next thread value 597 to access the second thread and the wait value 593 to notify the first thread that the lock is available. However, these are merely a few non-limiting examples of embodiments within the scope of the disclosed subject matter. In one embodiment, the apparatus 501 and system 500 may be capable of limiting the dynamic memory allocations and deallocations to a number substantially related or proportional to the sum of the number of locks and the number of threads. It is further contemplated that alterations of the lock's state 520 or the thread's values 593 & 597 may be done via a technique that attempts to minimize the occurrence of race conditions and other undesirable thread-related effects. In one embodiment, the alteration may be performed by a "compare-and-store" operation (a.k.a. a "test-and-set" operation) that confirms that a variable is equal to an expected value before allowing the variable to be set to a new value. It is also contemplated that a thread's wait value 593 and next thread value 597 may be stored within a memory and within separate cache lines of that memory. In another embodiment, the lock's queue length value 523 and last thread value 528 may be stored within the same cache line of a memory. The lock's first thread value 525 and a duplicate or shadowed version of the last thread value may be stored within a second memory cache line. However, these are merely a few specific embodiments of the disclosed subject matter and other embodiments are possible and contemplated.
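Returning to the lock releaser 540 described above, its two paths (no queue versus a populated queue) can be sketched as follows, building on the QueueLock and dequeue_first sketches earlier in this section; as before, the names are illustrative assumptions rather than the actual apparatus.

def release(lock):
    """Release path (sketch of lock releaser 540): free the lock when no
    thread waits; otherwise de-queue and notify the first waiting thread."""
    if lock.first is None:
        lock.flag = 0              # empty queue: simply mark the lock free
        return None
    head = dequeue_first(lock)     # the next thread value locates the successor,
                                   # moving the second thread to the first position
    return head                    # head.wait was cleared: it now holds the lock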
The techniques described herein are not limited to any particular hardware or software configuration; they may find applicability in any computing or processing environment. The techniques may be implemented in hardware, software, firmware or a combination thereof. The techniques may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, and similar devices that each include a processor, a storage medium readable or accessible by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and one or more output devices. Program code is applied to the data entered using the input device to perform the functions described and to generate output information. The output information may be applied to one or more output devices. Each program may be implemented in a high-level procedural or object-oriented programming language to communicate with a processing system. However, programs may be implemented in assembly or machine language, if desired. In any case, the language may be compiled or interpreted. Each such program may be stored on a storage medium or device, e.g., compact disc read-only memory (CD-ROM), digital versatile disk (DVD), hard disk, firmware, non-volatile memory, magnetic disk or similar medium or device, that is readable by a general or special purpose programmable machine for configuring and operating the machine when the storage medium or device is read by the computer to perform the procedures described herein. The system may also be considered to be implemented as a machine-readable or accessible storage medium, configured with a program, where the storage medium so configured causes a machine to operate in a specific manner. Other embodiments are within the scope of the following claims. While certain features of the disclosed subject matter have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes that fall within the true spirit of the disclosed subject matter. |
An indication of a power loss can be received at a cross point array memory dual in-line memory module (DIMM) operation component of a memory sub-system. The cross point array memory DIMM operation component includes a volatile memory component and a non-volatile cross point array memory component. In response to receiving the indication of the power loss, a type of write operation for the non-volatile cross point array memory component of the cross point array memory DIMM operation component is determined based on a characteristic of the memory sub-system. Data stored at the volatile memory component of the cross point array memory DIMM operation component is retrieved and written to the non-volatile cross point array memory component of the cross point array memory DIMM operation component by using the determined type of write operation. |
CLAIMSWhat is claimed is:1. A system comprising:a volatile memory component;a non-volatile cross point array memory component; anda processing device, operatively coupled with the volatile memory component and the non-volatile cross point array memory component, to:receive an indication of a power loss to the system;determine a characteristic of the system;in response to receiving the indication of the power loss to the system, determine, based on the characteristic of the system, a type of write operation from a plurality of types of write operations for the non-volatile cross point array memory component;retrieve data stored at the volatile memory component; andwrite the retrieved data to the non-volatile cross point array memory component by using the determined type of write operation.2. The system of claim 1, wherein the plurality of types of write operations for the non-volatile cross point array memory component comprises:a pre-scan write operation that writes the retrieved data to the non-volatile cross point array memory component based on a comparison between data blocks of the retrieved data and other data blocks stored at the non-volatile cross point array memory component; and a force write operation that writes each data block of the retrieved data stored in the volatile memory component to the non-volatile cross point array memory component.3. The system of claim 1, wherein the characteristic of the system corresponds to an energy level of a backup power source that provides a backup power to the system responsive to the power loss, and wherein the processing device is further to:determine whether the energy level of the backup power source satisfies an energy level threshold, and wherein determining the type of write operation is based on the determination of whether the energy level of the backup power source satisfies the energy level threshold, and wherein the energy level threshold is based on a particular energy level that is sufficient to write the retrieved data to the non-volatile cross point array memory component.4. The system of claim 1, wherein the characteristic of the system corresponds to an amount of the data stored in the volatile memory component to be written to the non-volatile cross point array memory component, and wherein the processing device is further to:determine whether the amount of the data stored in the volatile memory component to be written to the non-volatile cross point array memory component satisfies a data size threshold, wherein determining the type of write operation is based on the determination of whether the amount of the data satisfies the data size threshold, and wherein the data size threshold is based on an energy level of a backup power source for the system being sufficient to write the amount of the data to the non-volatile cross point array memory component.5. The system of claim 1, wherein the characteristic of the system corresponds to a classification of data blocks of the data, wherein the determined type of write operation for a particular data block of the data is based on a corresponding classification for each particular data block of the data.6. 
The system of claim 2, wherein to write data stored in the volatile memory component to the non-volatile cross point array memory component by using the pre-scan write operation, the processing device is further to:determine one or more data blocks of the non-volatile cross point array memory component where the data stored in the volatile memory component is to be written;compare data from each data block in the one or more data blocks of the non-volatile cross point array memory component with data from a corresponding data block of the data retrieved from the volatile memory component;determine a subset of the one or more data blocks of the non-volatile cross point array memory component having data different from data of a corresponding data block of the volatile memory component; andwrite, to the subset of the one or more data blocks of the non-volatile cross point array memory component, data of a corresponding data block stored in the volatile memory component.7. The system of claim 1, wherein the system is a dual in-line memory module (DIMM).8. A method comprising:receiving an indication of a power loss to a memory sub-system, the memory sub-system comprising a volatile memory component and a non-volatile cross point array memory component;determining a characteristic of the memory sub-system;in response to receiving the indication of the power loss to the memory sub-system, determining, based on the characteristic of the memory sub-system, a type of write operation from a plurality of types of write operations for the non-volatile cross point array memory component;retrieving data stored at the volatile memory component; andwriting, by a processing device, the retrieved data to the non-volatile cross point array memory component by using the determined type of write operation.9. The method of claim 8, wherein the plurality of types of write operations for the non-volatile cross point array memory component comprises:a pre-scan write operation that writes the retrieved data to the non-volatile cross point array memory component based on a comparison between data blocks of the retrieved data and other data blocks stored at the non-volatile cross point array memory component; and a force write operation that writes each data block of the retrieved data stored in the volatile memory component to the non-volatile cross point array memory component.10. The method of claim 8, wherein the characteristic of the memory sub-system corresponds to an energy level of a backup power source that provides a backup power to the memory sub-system responsive to the power loss, and wherein the method further comprises:determining whether the energy level of the backup power source satisfies an energy level threshold; andwherein determining the type of write operation is based on the determination of whether the energy level of the backup power source satisfies the energy level threshold, wherein the energy level threshold is based on a particular energy level that is sufficient to write the retrieved data to the non-volatile cross point array memory component.11. 
The method of claim 8, wherein the characteristic of the memory sub-system corresponds to an amount of the data stored in the volatile memory component to be written to the non-volatile cross point array memory component, and wherein the method further comprises:determining whether the amount of the data stored in the volatile memory component to be written to the non-volatile cross point array memory component satisfies a data size threshold; andwherein determining the type of write operation is based on the determination of whether the amount of the data satisfies the data size threshold, wherein the data size threshold is based on an energy level of a backup power source for the memory sub-system being sufficient to write the amount of the data to the non-volatile cross point array memory component.12. The method of claim 8, wherein the characteristic of the memory sub-system corresponds to a classification of data blocks of the data, wherein the determined type of write operation for a particular data block of the data is based on a corresponding classification for each particular data block of the data.13. The method of claim 9, wherein writing the data stored in the volatile memory component to the non-volatile cross point array memory component by using the pre-scan write operation comprises:determining one or more data blocks of the non-volatile cross point array memory component where the data stored in the volatile memory component is to be written;comparing data from each data block in the one or more data blocks of the non-volatile cross point array memory component with data from a corresponding data block of the data retrieved from the volatile memory component;determining a subset of the one or more data blocks of the non-volatile cross point array memory component having data different from data of a corresponding data block of the volatile memory component; andwriting, to the subset of the one or more data blocks of the non-volatile cross point array memory component, data of a corresponding data block stored in the volatile memory component.14. The method of claim 8, wherein the memory sub-system is a dual in-line memory module (DIMM).15. A system comprising:a volatile memory component;a non-volatile cross point array memory component; anda processing device, operatively coupled with the volatile memory component and the non-volatile cross point array memory component, to:receive an indication of a return of power to the system;in response to receiving the indication of the return of power, retrieve data from the non-volatile cross point array memory component and write the data from the non-volatile cross point array memory component to the volatile memory component; andin response to writing the data from the non-volatile cross point array memory component to the volatile memory component, write a same data value to one or more data blocks of the non-volatile cross point array memory component that stored the data written to the volatile memory component.16. The system of claim 15, wherein the processing device is further to, in response to writing the data from the non-volatile cross point array memory component to the volatile memory component, write the same data value to another one or more data blocks of the non-volatile cross point array memory component that do not store data written to the volatile memory component.17. 
The system of claim 15, wherein the processing device is further to:receive an indication of a power loss to the system;in response to receiving the indication of the power loss and writing the same data value to the one or more data blocks of the non-volatile cross point array memory component that stored the data written to the volatile memory component, determine to not write a particular data block of the volatile memory component matching a corresponding data block in the non-volatile cross point array memory component to the non-volatile cross point array memory component; and determine to write another data block of the volatile memory component not matching a corresponding data block of the non-volatile cross point array memory component to the non-volatile cross point array memory component.18. The system of claim 15, wherein the processing device is further to:write one or more data blocks of the volatile memory component to the non-volatile cross point array memory component while the power is supplied to the system; andin response to receiving an indication of a power loss to the system and writing the same data value to the one or more data blocks of the non-volatile cross point array memory component that stored data written to the volatile memory component:determine one or more data blocks of the volatile memory component that have not been written to the non-volatile cross point array memory component,determine to not write, to the non-volatile cross point array memory component, a subset of the determined one or more data blocks of the volatile memory component that match a corresponding data block in the non-volatile cross point array memory component, andwrite, to the non-volatile cross point array memory component, another subset of the determined one or more data blocks of the volatile memory component that do not match a corresponding data block in the non-volatile cross point array memory component.19. The system of claim 15, wherein the same data value corresponds to a zero value.20. The system of claim 15, wherein the system is a dual in-line memory module (DIMM). |
CROSS POINT ARRAY MEMORY IN A NON-VOLATILE DUAL IN-LINE MEMORY MODULE TECHNICAL FIELD [001] The present disclosure generally relates to memory sub-systems, and more specifically, relates to a cross point array memory in a non-volatile dual in-line memory module (DIMM). BACKGROUND [002] A memory sub-system can be a storage system, such as a solid-state drive (SSD), or a hard disk drive (HDD). A memory sub-system can be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). A memory sub-system can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components. BRIEF DESCRIPTION OF THE DRAWINGS [003] The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various implementations of the disclosure. [004] FIG. 1 illustrates an example computing environment that includes a memory sub-system in accordance with some embodiments of the present disclosure. [005] FIG. 2A illustrates example architecture of a dual in-line memory module (DIMM) with a cross point array memory in accordance with some embodiments of the present disclosure. [006] FIG. 2B illustrates example architecture of the DIMM with a cross point array memory in accordance with some other embodiments of the present disclosure. [007] FIG. 3 is a flow diagram of an example method to perform a save operation in accordance with some embodiments of the present disclosure. [008] FIG. 4 is a flow diagram of an example method to perform a restore operation in accordance with some embodiments of the present disclosure. [009] FIG. 5 is a flow diagram of an example method to perform a pre-save operation in accordance with some embodiments of the present disclosure. [0010] FIG. 6 is a block diagram of an example computer system in which implementations of the present disclosure can operate. DETAILED DESCRIPTION [0011] Aspects of the present disclosure are directed to a memory sub-system that includes a cross point array memory in a non-volatile dual in-line memory module (DIMM). A memory sub-system is also hereinafter referred to as a “memory device.” An example of a memory sub-system is a storage device that is coupled to a central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of storage devices include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, and a hard disk drive (HDD). Another example of a memory sub-system is a memory module that is coupled to the CPU via a memory bus. Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc. In some embodiments, the memory sub-system can be a hybrid memory/storage sub-system. In general, a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.
For example, the host system can utilize the DIMM of the memory sub-system as a cache memory. [0012] A conventional DIMM can include synchronous dynamic random access memory (SDRAM) that is used to store the data that is accessed by the host system. The SDRAM can be a volatile memory. As a result, when the conventional DIMM suffers a loss of power or a condition that results in a loss of power for an amount of time (e.g., during a restart), then the data stored at the SDRAM can be lost. Accordingly, the conventional DIMM can include a flash memory to store data from the SDRAM in the event of a power loss. When the DIMM loses power, the data at the SDRAM can be stored at the flash memory. For example, write operations can be performed to write the data at the SDRAM to the flash memory. Since the flash memory is a non-volatile memory, the data can remain stored at the flash memory when the loss of power is experienced by the DIMM. Subsequently, when the power is returned to the DIMM, the data stored at the flash memory can be written back to the SDRAM for use by the host system. [0013] Using a flash memory to store the data from the SDRAM of the DIMM in the event of a power loss can take a prolonged amount of time, because copying the data from the SDRAM requires writing it to the flash memory, which has inherent drawbacks. As a result, when the loss of power is experienced, there may not be enough time to save data stored at the SDRAM to the flash memory. Similarly, when power is restored to the DIMM, there can be additional downtime for the host system while data is being restored to the SDRAM from the flash memory. Additionally, the performance of a write operation and a read operation for the flash memory can utilize a larger amount of energy or power to read or write data from the flash memory. Therefore, a larger backup power source needs to be used with the conventional DIMM that includes flash memory as the non-volatile memory. Furthermore, the flash memory can have a more limited endurance. For example, a particular number of write operations and/or read operations can be performed at the flash memory before data stored at the flash memory can no longer be reliably stored at the flash memory. Thus, the life of a DIMM that includes a flash memory can also be limited by the endurance of the flash memory. [0014] Aspects of the present disclosure address the above and other deficiencies by using a cross point array memory in a DIMM. In some embodiments, the cross point array memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. For example, a bit of ‘0’ or ‘1’ can be determined based on a resistance value of a particular memory cell of the cross point array memory. The cross point array memory can be a non-volatile memory included in the DIMM that is used to store data from the SDRAM of the DIMM in the event of a loss of power. For example, the host system can provide signals to the DIMM that includes the cross point array memory. The host system can transmit a save command to the DIMM when an indication of a power loss is received or is expected. In response, the DIMM can perform a save operation to retrieve data from the SDRAM and store the data at the cross point array memory. Furthermore, when the power is returned to the DIMM, a restore operation can be performed to retrieve the data from the cross point array memory and store the data at the SDRAM of the DIMM.
After the data is restored to the SDRAM, the DIMM can prepare the cross point array memory for the next save operation by resetting data values at locations that previously stored the data. [0015] Advantages of the present disclosure include, but are not limited to, a reduction in the amount of time to store data from the SDRAM to the cross point array memory and in the amount of time to restore the data from the cross point array memory to the SDRAM. As such, a host system that is associated with the DIMM having a cross point array memory can operate on data stored at the DIMM in less time when power is returned to the DIMM. Additionally, since the performance of read operations and write operations on cross point array memory can utilize less energy, a smaller backup energy or power source can be used with or in the DIMM. Furthermore, the cross point array memory can have a higher endurance and can thus store data from more write operations without the data becoming unreliable. As such, a DIMM that uses the cross point array memory can have a longer lifespan or time in use. [0016] FIG. 1 illustrates an example computing environment 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure. The memory sub-system 110 can include media, such as memory components 112A to 112N. The memory components 112A to 112N can be volatile memory components, non-volatile memory components, or a combination of such. In some embodiments, the memory sub-system is a storage system. An example of a storage system is an SSD. In some embodiments, the memory sub-system 110 is a hybrid memory/storage sub-system. In general, the computing environment 100 can include a host system 120 that uses the memory sub-system 110. For example, the host system 120 can write data to the memory sub-system 110 and read data from the memory sub-system 110. [0017] The host system 120 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or such computing device that includes a memory and a processing device. The host system 120 can include or be coupled to the memory sub-system 110 so that the host system 120 can read data from or write data to the memory sub-system 110. The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. As used herein, “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N when the memory sub-system 110 is coupled with the host system 120 by the PCIe interface.
The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120. [0018] The memory components 112A to 112N can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of non-volatile memory components includes a negative-and (NAND) type flash memory. Each of the memory components 112A to 112N can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory components such as NAND type flash memory are described, the memory components 112A to 112N can be based on any other type of memory such as a volatile memory. In some embodiments, the memory components 112A to 112N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 112A to 112N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data. [0019] The memory system controller 115 (hereinafter referred to as “controller”) can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The controller 115 can include a processor (processing device) 117 configured to execute instructions stored in local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in FIG.
1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory sub-system 110 may not include a controller 115, and may instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). [0020] In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components 112A to 112N. The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 112A to 112N as well as convert responses associated with the memory components 112A to 112N into information for the host system 120. [0021] The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory components 112A to 112N. [0022] The memory sub-system 110 includes a cross point array memory DIMM operation component 113 (e.g., integrated circuitry with SDRAM and a cross point array memory). In some embodiments, the controller 115 includes at least a portion of the cross point array memory DIMM operation component 113. For example, the controller 115 can include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. [0023] The memory sub-system 110 receives power from a main power source. Additionally, the memory sub-system 110 includes or is connected to a backup power source in case of a power failure at the main power source. In response to the power failure, the cross point array memory DIMM operation component 113 of the memory sub-system 110 can receive an indication of a power loss resulting from the main power source. In response, the cross point array memory DIMM operation component 113 can save data. On the other hand, in response to detecting power recovery of the main power source, the cross point array memory DIMM operation component 113 can restore the saved data. The cross point array memory DIMM operation component 113 can also pre-save data while normal power is supplied from the main power source in order to shorten the time required to save the data in case of the power failure. Further details with regard to the operations of the cross point array memory DIMM operation component 113 are described below.
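As a rough orientation before the architecture figures, the control flow just described can be sketched in Python. Every name here (DimmOperationComponent, read_all, write_all, reset_used_blocks, the signal strings) is an assumption invented for the sketch; the actual component is controller hardware and firmware, not Python code.

class DimmOperationComponent:
    """Illustrative skeleton of the high-level behavior of component 113."""

    def __init__(self, volatile, nonvolatile):
        self.volatile = volatile        # volatile memory component (210)
        self.nonvolatile = nonvolatile  # non-volatile cross point array component (220)

    def on_signal(self, signal):
        if signal == "POWER_LOSS":      # treated as a save command
            self.save()
        elif signal == "POWER_RETURN":  # treated as a restore command
            self.restore()

    def save(self):
        # Transfer volatile contents to the non-volatile component; choosing
        # between the pre-scan and force write operations is sketched later.
        self.nonvolatile.write_all(self.volatile.read_all())

    def restore(self):
        # Copy the saved data back, then reset the used non-volatile blocks
        # so the component is ready for the next save.
        self.volatile.write_all(self.nonvolatile.read_all())
        self.nonvolatile.reset_used_blocks()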
[0024] FIG. 2A is an example system architecture of the cross point array memory DIMM operation component 113 for a save operation in accordance with some embodiments of the present disclosure. As shown, the DIMM controller 230 can include a host system channel 251 that provides signals to the cross point array memory DIMM operation component 113. The DIMM controller 230 can be connected to a volatile memory component 210 (e.g., an SDRAM) and a non-volatile cross point array memory component 220. The volatile memory component 210 stores data used or accessed by the host system 120. The non-volatile cross point array memory component 220 stores data from the volatile memory component 210 that can be lost, for example, due to a power loss. The volatile memory component 210 and the non-volatile cross point array memory component 220 can correspond to the media 112A to 112N in FIG. 1. [0025] The DIMM controller 230 monitors input signals from the host system channel 251. If the DIMM controller 230 detects a power loss signal or a save command, then a save operation can be performed by transferring data from the volatile memory component 210 to the non-volatile cross point array memory component 220. For example, the DIMM controller 230 can read data stored at the volatile memory component 210 via a volatile memory channel 255. The DIMM controller 230 can determine to save the data to the non-volatile cross point array memory using either a pre-scan write operation or a force write operation through communication via a non-volatile memory channel 257A/257B. Details about the pre-scan write operation and the force write operation are described below with respect to operation 330 of FIG. 3. [0026] The DIMM controller 230 can retrieve a characteristic of the memory sub-system 110 using a non-volatile memory channel 253 in order to determine which write operation to use for the save operation. For example, the DIMM controller 230 can access, via the non-volatile memory channel 253, a power source controller or a backup power source to obtain an energy level of the backup power source. The backup power source can be connected to the memory sub-system 110 and supply a backup power to the memory sub-system 110 during the save operation to be performed by the cross point array memory DIMM operation component 113, but not during a normal operation or other operations that are performed as the host system 120 utilizes data stored at the volatile memory component 210. For the normal operation, a main power source can provide power to the memory sub-system 110. The DIMM controller 230 can also access, via the non-volatile memory channel 253, a data store or the volatile memory component 210 that has metadata of data stored at the volatile memory component 210. The metadata can include information about the data of the volatile memory component 210, such as an amount of data stored at the volatile memory component 210 or a classification of the data (e.g., a type of priority) among other information of the data stored at the volatile memory component 210. The DIMM controller 230 can then determine which write operation to perform based on the characteristic of the memory sub-system 110 collected via the non-volatile memory channel 253. [0027] In some embodiments, the DIMM controller 230 can perform a pre-save operation for faster performance of the save operation. For example, the DIMM controller 230 can start transferring some of the data stored at the volatile memory component 210 to the non-volatile cross point array memory component 220 while the main power source properly operates. More details regarding the pre-save operation are described with respect to FIG. 5. [0028] FIG.
2B is an example system architecture of the cross point array memory DIMM operation component 113 for a restore operation in accordance with some embodiments of the present disclosure. Similar to the system architecture of FIG. 2A, the cross point array memory DIMM operation component 113 includes the DIMM controller 230 connected to the volatile memory component 210 and the non-volatile cross point array memory component 220. A host system channel 271 of the DIMM controller 230 can provide a return of power signal or a restore command from the host system 120 to the DIMM controller 230. The host system channel 271 can be the same channel as the host system channel 251 of FIG. 2A. In response to detecting the return of power signal, the DIMM controller 230 can perform a restore operation to transfer data from the non-volatile cross point array memory component 220 to the volatile memory component 210. For example, the DIMM controller 230 can read data saved in the non-volatile cross point array memory component 220 via a non-volatile memory channel 273 and write the data to the volatile memory component 210 via a volatile memory channel 275. The volatile memory channel 275 can be the same channel as the volatile memory channel 255 of FIG. 2A. After completing the write operation on the volatile memory component 210, the DIMM controller 230 can reset the non-volatile cross point array memory component 220 via another non-volatile memory channel 277. The non-volatile memory channels 273 and 277 can be the same channels as the non-volatile memory channels 257A and 257B of FIG. 2A. Further details regarding the restore operation are described in conjunction with FIG. 4. [0029] FIG. 3 is a flow diagram of an example method 300 to perform a save operation in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the cross point array memory DIMM operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. [0030] As shown in FIG. 3, at operation 310, a processing device receives an indication of a power loss. For example, the processing device or the DIMM controller 230 can detect a power loss signal generated from the host system 120 via the host system channel 251 of the cross point array memory DIMM operation component 113. The power loss signal can be at a low signal level (e.g., asserted to a value of ‘0’). The power loss signal can indicate that the host system 120 intends to power down or can indicate a failure or an expected failure of a main power source that supplies energy to the memory sub-system 110 and/or the host system 120.
The processing device can process the power loss signal as a save command from the host system 120 to initiate a save operation so that data stored at the volatile memory component 210 can be written to the non-volatile cross point array memory component 220. [0031] The processing device, at operation 320, determines a characteristic of the memory sub-system 110. For example, the processing device can determine a characteristic of a backup power source that provides a backup power to the memory sub-system 110 for the save operation to be performed by the cross point array memory DIMM operation component 113. The processing device can determine an energy level (i.e., how much energy is remaining) of the backup power source. In another example, the processing device can determine a characteristic of data stored in the volatile memory component 210 that is to be saved in the non-volatile cross point array memory component 220. In particular, the processing device can determine a size of the data stored in the volatile memory component 210 or an amount of the data to be transferred to the non-volatile cross point array memory component 220. Additionally, the processing device can identify a classification of data blocks of the data stored in the volatile memory component 210. Data blocks can be classified into high priority and low priority depending on the stored data. For example, a high priority data block can include user data (e.g., data generated by a user of the host system 120) or any other data critical to operation of the host system 120 (or more difficult to recover). A low priority data block can have non-user data such as metadata for the user data or any other data less detrimental to the operation of the host system 120 (or easier to recover). The processing device can classify data blocks based on metadata for the data blocks of the volatile memory component 210. [0032] At operation 330, the processing device, in response to receiving the indication of the power loss, determines a type of write operation for the non-volatile cross point array memory component 220 based on the characteristic of the memory sub-system 110. There can be multiple write operations available to be performed on the non-volatile cross point array memory component 220 of the cross point array memory DIMM operation component 113. Examples of such write operations include, but are not limited to, a pre-scan write operation and a force write operation. [0033] A pre-scan write operation can write data to the non-volatile cross point array memory component 220 based on a comparison between data blocks of the data from the volatile memory and data blocks previously stored at the non-volatile cross point array memory component 220. For example, such data blocks can store values that were previously written to the data blocks when prior data was written to the non-volatile cross point array memory component 220. The values that were previously written to the data blocks for the prior data can still be present at the non-volatile cross point array memory component 220 as an erase operation is not performed for the non-volatile cross point array memory component 220. In some embodiments, such data blocks can store the same value (e.g., zero) as they were previously reset as described in detail with respect to operation 430 of FIG. 4. The pre-scan write operation can include a pre-read operation.
The pre-read operation can first identify locations (or data blocks) in the non-volatile cross point array memory component 220 to be written and can read data that is currently stored at these locations of the non-volatile cross point array memory component 220. Each data block of the data to be stored (e.g., data from the volatile memory component 210) would have a corresponding data block in the non-volatile cross point array memory component 220. The pre-scan write operation can also include a comparison operation following the pre-read operation. For example, if a particular data block at the non-volatile cross point array memory component 220 currently stores data that matches a corresponding data block of the data from the volatile memory component 210, then the processing device can determine not to write that data block of the data from the volatile memory component 210 to the data block at the non-volatile cross point array memory component 220, as the data currently stored at the non-volatile cross point array memory component 220 already matches the particular data block of the volatile memory component 210. Otherwise, if the particular data block at the non-volatile cross point array memory component 220 currently stores data that does not match the corresponding data block of the data that is from the volatile memory component 210, then a write operation can be performed at the particular data block of the non-volatile cross point array memory component 220. For example, a voltage signal can be applied to the particular data block of the non-volatile cross point array memory component 220 to change a value of the data stored at the particular data block. Therefore, in the pre-scan write operation, the processing device writes data to data blocks of the non-volatile cross point array memory component 220 for the data blocks that include a data value that is different from a data value of a corresponding data block from the volatile memory component 210. [0034] On the other hand, a force write operation does not perform the pre-read operation and/or comparison operation. Instead, the force write operation can apply a voltage to every data block of the non-volatile cross point array memory component 220 that is to store data from the volatile memory component 210. For example, the force write operation can apply a voltage to a data block to set a value of ‘0’ and can apply another voltage to another data block to set a value of ‘1.’ Thus, the force write operation can write the entire data of the volatile memory component 210 to the non-volatile cross point array memory component 220. In some embodiments, the pre-scan write operation can be performed in less time and can take less power. On the other hand, the force write operation can take more time and more power. However, the force write operation can be considered to result in more reliable data storage: as each data block is written regardless of the data it currently stores, the respective data block becomes less prone to an error (e.g., an error caused by a drift in the voltage threshold for storing data over time). Therefore, the processing device can determine to use the force write operation for better reliability of data when there is sufficient backup power and/or time to complete the save operation.
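The contrast between the two write operations can be captured in a short sketch. Representing a memory component as a simple address-to-data mapping is an assumption made purely for illustration; real cross point media are programmed by applying voltages to individual cells.

def pre_scan_write(nvm, src):
    """Pre-scan write (sketch): pre-read each target block and write only
    the blocks whose current contents differ from the data being saved."""
    for addr, new_data in src.items():
        if nvm.get(addr) != new_data:  # pre-read plus comparison operation
            nvm[addr] = new_data       # write only on a mismatch

def force_write(nvm, src):
    """Force write (sketch): write every block unconditionally. Slower and
    more energy-hungry, but every cell is freshly programmed."""
    for addr, new_data in src.items():
        nvm[addr] = new_data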
[0035] In some embodiments, the processing device can, as a default, write data from the volatile memory component 210 to the non-volatile cross point array memory component 220 using the pre-scan write operation for the save operation. In another embodiment, the processing device can selectively perform the force write operation instead of the pre-scan write operation in certain instances. For example, if there is more than enough power in the backup power source of the memory sub-system 110 to perform the save operation, the processing device can use the force write operation for better reliability of data. The processing device can determine whether the energy level of the backup power source satisfies an energy level threshold. The energy level threshold can be based on a particular energy level that is sufficient to write the data from the volatile memory component 210 to the non-volatile cross point array memory component 220. The particular energy level can also indicate an energy level of the backup power source that is sufficient for the processing device to perform the force write operation. If an energy level of the backup power source exceeds a particular energy level set as the threshold, the processing device can determine to use the force write operation. Otherwise, the processing device can perform the pre-scan write operation. [0036] As another example, the processing device can determine to use the force write operation when an amount of the data stored in the volatile memory component 210 to be written to the non-volatile cross point array memory component 220 does not exceed a particular size of data. Because the force write operation takes longer than the pre-scan write operation, the processing device can perform the force write operation when the force write operation can be completed for the amount of data stored in the volatile memory component 210. It is assumed that there is sufficient backup energy available for the save operation using the force write operation. Thus, the processing device can apply a data size threshold to control when to write data to the non-volatile cross point array memory component 220 by performing the force write operation. [0037] The processing device can also determine which write operation to use based on a classification of data blocks from the volatile memory component 210. For example, the processing device can store data blocks that are classified as high priority using the force write operation and other data blocks classified as low priority using the pre-scan write operation. The high priority data block can store user data (e.g., data generated by a user of the host system 120) or any other data critical to operation of the host system 120 (or more difficult to recover) and the low priority data block can include non-user data such as metadata for the user data or any other data less detrimental to the operation of the host system 120 (or easier to recover). [0038] The processing device can further consider an amount of energy that can be supplied by the backup power source. If there is not sufficient backup power to write some data blocks using the force write operation, the processing device can determine to perform the pre-scan write operation for data blocks classified as low priority while determining to perform the force write operation for data blocks classified as high priority.
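Paragraphs [0035] through [0038] present the selection criteria as alternatives; one plausible way to combine them is sketched below. The threshold values, the priority labels, and the idea of folding the criteria into a single policy are illustrative assumptions rather than anything fixed by this disclosure.

PRE_SCAN, FORCE = "pre-scan", "force"

def choose_write_operation(energy_level, energy_threshold,
                           data_size, size_threshold, priority="low"):
    """Pick a type of write operation for the save, per the criteria above."""
    if energy_level >= energy_threshold:
        return FORCE   # ample backup energy: favor the more reliable write
    if data_size <= size_threshold:
        return FORCE   # small enough to force-write within the backup budget
    if priority == "high":
        return FORCE   # critical blocks get the more reliable write
    return PRE_SCAN    # default: faster and lower-energy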
[0039] At operation 340, the processing device retrieves data stored at the volatile memory component 210. For example, a read operation is performed on the volatile memory component 210. Then, at operation 350, the processing device writes the retrieved data to the non-volatile cross point array memory component 220 by using the determined type of write operation. For example, the pre-scan write operation or the force write operation can be performed as described above with respect to operation 330. The processing device can further keep a record of which data block from the volatile memory component 210 is written to which data block of the non-volatile cross point array memory component 220. [0040] In some embodiments, data can be written to the non-volatile cross point array memory component 220 sequentially when being written from the volatile memory component 210. For example, the save operation can save the data from the volatile memory component 210 in contiguous or proximate memory cells or data block locations. As such, a disturbance mechanism can be reduced when reading the data from the non-volatile cross point array memory component 220 during a restore operation. For example, the data in the contiguous memory cells can be read in a streaming manner. In the same or alternative embodiments, multiple cursors can be used to write data from the volatile memory component 210 to the non-volatile cross point array memory component 220. For example, the volatile memory component 210 can include multiple channels of data where data can be retrieved from the volatile memory component 210. Each channel can be used to provide data to a particular cursor that writes data to a particular data block location of the cross point array memory. [0041] FIG. 4 is a flow diagram of an example method 400 to perform a restore operation in accordance with some embodiments of the present disclosure. The method 400 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the cross point array memory DIMM operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. [0042] At operation 410, a processing device receives an indication of a return of power. For example, the processing device can detect a return of power signal generated from the host system 120 via the host system channel 271 of the cross point array memory DIMM operation component 113. The return of power signal can be a high signal level (e.g., asserted to a value of ‘1’). Also, the return of power signal can indicate a return or an expected return of power from the main power source for the memory sub-system 110 and/or the host system 120.
The processing device can process the return of power signal as a restore command from the host system 120 to initiate a restore operation that transfers data stored at the non-volatile cross point array memory component 220 to the volatile memory component 210. [0043] In response to receiving the indication of the return of power, the processing device, at operation 420, retrieves data from the non-volatile cross point array memory component 220 and writes the data to the volatile memory component 210. At operation 430, the processing device resets the data blocks of the non-volatile cross point array memory component 220 that stored the data written to the volatile memory component 210. For example, the processing device can determine which data blocks in the non-volatile cross point array memory component 220 have been written for the prior save operation. The processing device can refer to a record (as mentioned with respect to operation 350) that maps data blocks of the volatile memory component 210 to data blocks in the non-volatile cross point array memory component 220 in the save operation. The processing device then can write the same data value (for example, a data value of ‘0’) to the data blocks of the non-volatile cross point array memory component 220 that stored the data written to the volatile memory component 210. In some embodiments, the processing device can perform the force write operation on these data blocks of the non-volatile cross point array memory component 220. The processing device can apply a voltage to these data blocks of the non-volatile cross point array memory component 220 to set a value of ‘0.’ The setting of the value of each of the memory cells to a value of ‘0’ can result in a better threshold voltage distribution of the memory cells when subsequent values are stored at the memory cells of the data blocks. In some embodiments, the processing device can use the pre-scan write operation on these data blocks of the non-volatile cross point array memory component 220 in order to reset the data values. The processing device can additionally reset other data blocks of the non-volatile cross point array memory component 220 that have not stored data written to the volatile memory component 210. [0044] After the non-volatile cross point array memory component 220 has been reset, the processing device can receive an indication of a power loss triggering the save operation described with respect to FIG. 3. In response, the processing device can perform the pre-scan write operation as a default to save data stored at the volatile memory component 210 to the non-volatile cross point array memory component 220.
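The restore-then-reset sequence can be sketched as follows, again treating the memory components as address-to-data mappings and assuming a record, kept during the save operation, that maps volatile addresses to the non-volatile blocks holding their data; both representations are illustrative assumptions.

RESET_VALUE = 0  # the same data value written back to used blocks (a zero value)

def restore_and_reset(nvm, volatile, save_record):
    """Restore flow (sketch): copy the saved blocks back to volatile memory,
    then reset the non-volatile blocks that held them for the next save."""
    for vol_addr, nvm_addr in save_record.items():
        volatile[vol_addr] = nvm[nvm_addr]  # operation 420: restore the data
    for nvm_addr in save_record.values():
        nvm[nvm_addr] = RESET_VALUE         # operation 430: force-write the reset value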
Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

[0046] A pre-save operation can be performed by writing some data stored at the volatile memory component 210 to the non-volatile cross point array memory component 220 during normal operation of the cross point array memory DIMM operation component 113. In this way, the time required for the save operation can be reduced, as certain data stored at the volatile memory component 210 can be stored at the non-volatile cross point array memory component 220 before the save operation is performed. The pre-save operation can be performed after the non-volatile cross point array memory component 220 has been reset as described with respect to operation 430.

[0047] At operation 510, a processing device determines a subset of data blocks at the volatile memory component 210 for the pre-save operation. For example, the processing device can start performing the pre-save operation from data blocks that were stored earlier (i.e., older data) in the volatile memory component 210. In another example, the processing device can select data blocks that are least frequently accessed by the host system 120.

[0048] The processing device, at operation 520, retrieves the determined subset of data blocks from the volatile memory component 210. The processing device then, at operation 530, writes the subset of data blocks to the non-volatile cross point array memory based on a first type of write operation. For example, the first type of write operation can be performed while power is supplied to the memory sub-system 110 and/or the host system 120 from the main power source. In some embodiments, the first type of write operation can be the force write operation that is to be performed on the subset of data blocks of the volatile memory component 210 for better reliability of the subset of data blocks when stored at the non-volatile cross point array memory. In other embodiments, the first type of write operation can be the pre-scan write operation. The processing device can perform the write operation as a background process that does not interfere with operations of the host system 120. The processing device can also keep a record of which data block from the volatile memory component 210 has been transferred to the non-volatile cross point array memory component 220.

[0049] While some data blocks of the volatile memory component 210 are being stored at the non-volatile cross point array memory component 220, the processing device, at operation 540, receives an indication of a power loss similar to the power loss signal described with respect to operation 310 in Fig. 3. In response, the processing device, at operation 550, saves the remaining data blocks from the volatile memory component 210 to the non-volatile cross point array memory component 220 based on a second type of write operation. The second type of write operation can be different than the first type of write operation. For example, the first type of write operation can be the force write operation and the second type of write operation can be the pre-scan write operation. The processing device can determine the remaining data blocks based on the record kept from the pre-save operation, as illustrated in the sketch below.
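By way of a hedged illustration only, the record-keeping that ties the background pre-save of operations 510-530 to the power-loss save of operation 550 can be sketched in a few lines of C. All names, block counts, and helper functions below (e.g., nv_write) are invented for this sketch and do not appear in the disclosure; real controller firmware would drive the cross point array media and the write sequences directly.

```c
/*
 * Minimal sketch of the record-based pre-save/save flow described above.
 * All identifiers are illustrative; block contents are simulated with
 * plain arrays rather than real media accesses.
 */
#include <stdbool.h>
#include <stdio.h>

#define NUM_BLOCKS 8            /* data blocks in the volatile component  */

typedef enum { FORCE_WRITE, PRE_SCAN_WRITE } write_type;

static int  volatile_mem[NUM_BLOCKS];    /* stands in for component 210   */
static int  nonvolatile_mem[NUM_BLOCKS]; /* stands in for component 220   */
static bool saved[NUM_BLOCKS];           /* record of transferred blocks  */

/* Hypothetical low-level write; a real controller would select the media
 * write sequence according to the requested write type.                  */
static void nv_write(int block, int value, write_type type)
{
    (void)type;                  /* type would pick force vs. pre-scan    */
    nonvolatile_mem[block] = value;
    saved[block] = true;
}

/* Pre-save: copy a chosen subset (here, the lowest-numbered blocks stand
 * in for "older" data) in the background using the first write type.     */
static void pre_save(int subset_size)
{
    for (int b = 0; b < subset_size && b < NUM_BLOCKS; b++)
        nv_write(b, volatile_mem[b], FORCE_WRITE);
}

/* Power-loss save: consult the record and persist only the blocks the
 * pre-save has not yet transferred, using the second write type.         */
static void save_on_power_loss(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (!saved[b])
            nv_write(b, volatile_mem[b], PRE_SCAN_WRITE);
}

int main(void)
{
    for (int b = 0; b < NUM_BLOCKS; b++)
        volatile_mem[b] = b + 100;   /* fake host data */

    pre_save(3);                     /* background pre-save of 3 blocks   */
    save_on_power_loss();            /* remaining 5 blocks on the signal  */

    for (int b = 0; b < NUM_BLOCKS; b++)
        printf("block %d -> %d\n", b, nonvolatile_mem[b]);
    return 0;
}
```

The force write versus pre-scan write distinction is carried here only as an enum selecting a media write sequence; the point of the sketch is that the record (the saved[] array) is what lets the power-loss path touch only the blocks the background pre-save has not already transferred.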
The remaining data blocks of the volatile memory component 210 are the data blocks that have not yet been written to the non-volatile cross point array memory component 220 during the pre-save operation. In some embodiments, the processing device can perform the save operation at operation 550 as described with respect to Fig. 4.[0050] FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the cross point array memory DIMM operation component 113 of Fig. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.[0051] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions(sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term“machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.[0052] The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630.[0053] Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. 
The computer system 600 can further include a network interface device 608 to communicate over the network 620.

[0054] The data storage system 618 can include a machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 can also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, data storage system 618, and/or main memory 604 can correspond to the memory sub-system 110 of FIG. 1.

[0055] In one embodiment, the instructions 626 include instructions to implement functionality corresponding to a cross point array DIMM operation component (e.g., the cross point array DIMM operation component 113 of Fig. 1). While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

[0056] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

[0057] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

[0058] The present disclosure also relates to an apparatus for performing the operations herein.
This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

[0059] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.

[0060] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, etc.

[0061] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
An input device may operate with a variety of different host processor-based systems running a variety of different applications by providing a translation module which translates input commands in one format to a format compatible with one or more applications that may run on a given processor-based system. A table may be provided, for example, in software, which enables a variety of different input device formats to be converted into a variety of formats utilized by an application. In this way, contention between an application and an input device may be resolved. |
What is claimed is: 1. A method comprising:determining which of two applications has focus, each of those applications accepting commands in a format different than the format accepted by the other of the two applications; and translating a command received in a format incompatible with the application having focus into a format compatible with the application having focus. 2. The method of claim 1 including converting a numerical command in a first format to a second format in terms of keystroke combinations.3. The method of claim 1 including translating the command externally to the application having focus and externally to the other application.4. The method of claim 1 including receiving the command from a remote control unit.5. The method of claim 4 including converting the command received from the remote control unit to a format suitable for navigating in a web browser.6. An article comprising a medium storing instructions that, if executed, enable a processor-based system to perform the steps of:determining which of two applications has focus, each of those applications accepting commands in a format different than the format accepted by the other of the two applications; and translating a command received in a format incompatible with the application having focus into a format compatible with the application having focus. 7. The article of claim 6 further storing instructions that, if executed, enable a processor-based system to perform the step of converting a numerical command in a first format to a second format in terms of keystroke combinations.8. The article of claim 6 further storing instructions that, if executed, enable a processor-based system to perform the step of translating the command externally to the application having focus and externally to the other application.9. The article of claim 6 further storing instructions that, if executed, enable the processor-based system to perform the step of receiving the command from a remote control unit.10. The article of claim 9 further storing instructions that, if executed, enable the processor-based system to perform the step of converting the command received from the remote control unit to a format suitable for navigating in a web browser.
BACKGROUND

This invention relates generally to processing of input commands by processor-based systems.

A well-defined protocol exists for commands from input devices to processor-based systems. For example, the Universal Serial Bus (USB) Device Class Definition for Human Interface Devices (HID), Firmware Specification, Version 1.1, dated Apr. 7, 1999 (available at www.usb.org) sets forth detailed systems for interfacing input devices with processor-based systems. However, a number of circumstances may arise which render such systems inapplicable. For example, a so-called legacy input device may be provided which does not provide signals in the proper format recognized under a given specification. Alternatively, an application running on a processor-based system may be a legacy application which is not adapted to recognize the particular commands provided by a given input device.

Thus, in a number of circumstances, there may be a mismatch between the command set provided by the input device and the command set recognized by a given application. In such cases, a given input device may not be useful with a given processor-based system, or a given application may not be useful with a given processor-based system or a given input device.

Thus, there is a continuing need for a way to enable more input devices to work with more applications run on processor-based systems.

SUMMARY

In accordance with one aspect, a method includes receiving on a processor-based system a command from an input device in a first format. The command is translated to a second format compatible with an application on the processor-based system.

Other aspects are set forth in the accompanying detailed description and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic depiction of one embodiment of the present invention;

FIG. 2 is a flow diagram for software for implementing one embodiment of the present invention; and

FIG. 3 is a hardware block diagram of one embodiment of the present invention.

DETAILED DESCRIPTION

An input device 10, shown in FIG. 1, may interface with a host processor-based system 12 through a link 14. Examples of input devices 10 include keyboards, pointing devices, front panel controls, controls on processor-based devices such as telephones, video cassette recorders, games and simulation devices, sports equipment and appliances, as examples. The link 14 may be a cable such as a USB cable or a wireless link such as an infrared or radio frequency link, as examples. The processor-based system 12 may be a desktop computer, a laptop computer, an appliance, a set top box, or any of a variety of other processor-based devices.

The input device 10 may provide a signal to the host processor-based system 12 in a first format. The host processor-based system 12 may include applications 26 which process input commands in a second format. For example, the input device 10 may operate in accordance with the HID Firmware Specification, but the application 26 may be a legacy or non-compliant application. Conversely, the application 26 may process commands in accordance with the HID Firmware Specification but the input device 10 may be a legacy device which provides numerical commands non-compliant with that specification.

While an example is provided of input devices 10 that are compliant or not compliant with the HID Firmware Specification, the present invention is applicable in a variety of situations where an input device provides commands in one format and an application processes commands in a different format.
Similarly, while in one example the link 14 is a Universal Serial Bus Specification compliant cable, the present invention is not in any way limited to USB embodiments.

The input device 10 may provide a signal over the link 14 that is received by interface or receiving hardware 16. The receiving hardware 16 passes a received command up an input stack, as indicated by the arrow 32. A translation module 22, which may be implemented in software or hardware, is responsible for translating the input command from the first format to the second format. The translated command is then made available to the application 26 as indicated by the arrow 36. The translation module 22 may use a database or tables 24 to translate from one format to another. The available formats may be numerous and the conversions between these formats may be equally numerous. Therefore, tables 24 may provide information about how to convert from any of a variety of input device formats to any of a variety of application formats.

In one embodiment of the present invention, the input device 10 is a remote control unit (RCU) and the host processor-based system 12 may be a set top processor-based system. In such case, the link 14 may be a bidirectional infrared link and the receiving hardware 16 may be an infrared interface device. In this case, a legacy input device 10 may provide numerical commands while HID Firmware Specification compliant codes may be used by the application 26.

The ability to dynamically change the RCU commands to keystroke combinations may be useful, for example, when modifying an application's behavior. For example, an application may be designed to run full screen as the sole application. However, there may be times when more than one application may be active on a given display. In order to have these applications coexist when they share the screen, as well as to allow these applications to be controlled by the same RCU, functionality can be limited during the times that the applications share focus. In order to facilitate this limiting of functionality, inputs may be masked, allowing an application's behavior to be modified without having to change the state of the application. For example, the input commands may be masked in the translation module 22 at various times.

In addition, the RCU functionality may be remapped on different applications without modifying a previously functional application. For example, a web browser may have accelerator keys for its navigation functions. The client application may support the web browser when the web browser takes focus, by modifying the RCU commands to keystroke combinations that reflect the accelerator keys on the web browser's user interface. Thus, differences between input device and application can be handled externally to the application and the input device.

Turning now to FIG. 2, in an embodiment in which the translation module 22 is implemented in software 28, input device specific commands may be received as indicated in block 30. In one embodiment of the present invention, legacy and numerical commands from an input device 10 may be received by the host processor-based system 12. These commands are then provided, as indicated in block 32, to the translation module 22. The translation module 22 maps the commands received from the input device 10 to keystrokes in accordance with the protocol utilized by a particular application, as indicated in block 34. The translated commands may then be provided to the application 26 as indicated in block 36.
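As a rough sketch of the table-driven flow of blocks 30-36 — with all application identifiers, command codes, and keystroke strings invented for illustration, not taken from the specification — the focus-aware translation might look like the following in C:

```c
/*
 * Minimal sketch of a focus-aware translation table. The application
 * names, RCU codes, and keystroke strings are hypothetical examples;
 * the actual tables 24 and module 22 could be organized differently.
 */
#include <stdio.h>

enum { APP_CLIENT, APP_BROWSER, NUM_APPS };

struct mapping {
    int         rcu_code;     /* numerical command from the remote unit */
    const char *keystrokes;   /* keystroke combination the app accepts  */
};

/* One table per application, mirroring tables 24. An empty keystroke
 * string masks the input while applications share focus.               */
static const struct mapping tables[NUM_APPS][3] = {
    [APP_CLIENT]  = { {0x10, "ENTER"},    {0x11, "F5"},     {0x12, ""}    },
    [APP_BROWSER] = { {0x10, "ALT+LEFT"}, {0x11, "CTRL+R"}, {0x12, "TAB"} },
};

/* Translate a received command for whichever application has focus;
 * returns NULL when the command is masked or unknown.                  */
static const char *translate(int app_in_focus, int rcu_code)
{
    for (int i = 0; i < 3; i++) {
        const struct mapping *m = &tables[app_in_focus][i];
        if (m->rcu_code == rcu_code)
            return m->keystrokes[0] ? m->keystrokes : NULL;
    }
    return NULL;
}

int main(void)
{
    const char *k = translate(APP_BROWSER, 0x10);
    printf("0x10 -> %s\n", k ? k : "(masked)");
    return 0;
}
```

Under this sketch, masking an input while applications share focus reduces to leaving an entry's keystroke string empty, so the translation returns nothing and the application's state is never changed — mirroring the masking in the translation module 22 described above.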
The application 26 then processes the commands, as indicated in block 38, without modification of the application.

Referring next to FIG. 3, a host processor-based system 12 may include a wireless link 14 with a remote control unit acting as the input device 10. The system 12 may include a processor 40 coupled to an interface 42 such as a bridge or a chipset. The interface 42 may, for example, couple a system memory 44 and a bus 46. The bus 46 in turn may be coupled to another interface 48 which also may be a bridge or part of a chipset. The interface 48 may in turn be coupled to a hard disk drive 50 or other storage medium, such as a floppy drive, a compact disk drive, a digital versatile disk drive, a flash memory or the like. In one embodiment of the present invention, the module 22 (if implemented in software), the tables 24, the software 28, and the application 26 may be stored on the hard disk drive 50.

A second bus 52 may be coupled to an airwave interface operating as the receiving hardware 16. The hardware 16 may receive signals from the input device 10 and may convert those signals into a form compatible with the processor-based system 12.

The input device 10 may be conventional in many respects and may include a wireless interface 58 which is coupled to a key code generating controller 60. The controller 60 in turn may be coupled to a storage 62 that may store operating protocols for the input device 10. In one embodiment of the present invention, the input device 10 may be battery powered.

While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
Tunneling field effect transistors (TFETs) for CMOS architectures and approaches to fabricating N-type and P-type TFETs are described. For example, a tunneling field effect transistor (TFET) includes a homojunction active region disposed above a substrate. The homojunction active region includes a relaxed Ge or GeSn body having an undoped channel region therein. The homojunction active region also includes doped source and drain regions disposed in the relaxed Ge or GeSn body, on either side of the channel region. The TFET also includes a gate stack disposed on the channel region, between the source and drain regions. The gate stack includes a gate dielectric portion and gate electrode portion. |
CLAIMS What is claimed is: 1. A tunneling field effect transistor (TFET), comprising: a homojunction active region disposed above a substrate, the homojunction active region comprising: a relaxed Ge or GeSn body having an undoped channel region therein; and doped source and drain regions disposed in the relaxed Ge or GeSn body, on either side of the channel region; and a gate stack disposed on the channel region, between the source and drain regions, the gate stack comprising a gate dielectric portion and gate electrode portion. 2. The TFET of claim 1, wherein the relaxed Ge or GeSn body is a direct band gap body and has a thickness of, or less than, approximately 5 nanometers, and wherein the TFET is a finfet, trigate or square nanowire-based device. 3. A tunneling field effect transistor (TFET), comprising: a hetero-junction active region disposed above a substrate, the hetero-junction active region comprising: a relaxed body comprising a Ge or GeSn portion and a lattice matched Group III-V material portion and having an undoped channel region in both the Ge or GeSn portion and the lattice matched Group III-V material portion; a doped source region disposed in the Ge or GeSn portion of the relaxed body, on a first side of the channel region; and a doped drain region disposed in the Group III-V material portion of the relaxed body, on a second side of the channel region; and a gate stack disposed on the channel region, between the source and drain regions, the gate stack comprising a gate dielectric portion and gate electrode portion. 4. The TFET of claim 3, wherein the Ge or GeSn portion of the relaxed body is a Ge portion, and the lattice matched Group III-V material portion is a GaAs or Ga0.5In0.5P portion. 5. The TFET of claim 3, wherein the relaxed body is a direct band gap body and has a thickness of, or less than, approximately 5 nanometers, and wherein the TFET is a finfet, trigate or square nanowire-based device. 6. A tunneling field effect transistor (TFET), comprising: a homojunction active region disposed above a relaxed substrate, the homojunction active region comprising: a biaxially tensile strained Ge or Ge1-ySny body having an undoped channel region therein; and doped source and drain regions disposed in the biaxially tensile strained Ge or Ge1-ySny body, on either side of the channel region; and a gate stack disposed on the channel region, between the source and drain regions, the gate stack comprising a gate dielectric portion and gate electrode portion. 7. The TFET of claim 6, wherein the relaxed substrate is a Ge1-xSnx (x > y) or InxGa1-xAs substrate. 8. The TFET of claim 6, wherein the biaxially tensile strained Ge or Ge1-ySny body is a direct band gap body and has a thickness of, or less than, approximately 5 nanometers. 9. The TFET of claim 6, wherein the TFET is a planar, finfet, trigate or square nanowire-based device. 10. The TFET of claim 6, wherein the TFET is a finfet or trigate device, and the strained Ge or Ge1-ySny body has uniaxial tensile stress along a crystal orientation of <100>, <010> or <001>.
11. A tunneling field effect transistor (TFET), comprising: a hetero-junction active region disposed above a substrate, the hetero-junction active region comprising: a vertical nanowire comprising a lower Ge portion and an upper GeSn portion and having an undoped channel region in only the GeSn portion; a doped source region disposed in the Ge portion of the vertical nanowire, below the channel region; and a doped drain region disposed in the GeSn portion of the vertical nanowire, above the channel region; and a gate stack disposed surrounding the channel region, between the source and drain regions, the gate stack comprising a gate dielectric portion and gate electrode portion. 12. The TFET of claim 11, wherein the lower Ge portion of the vertical nanowire is disposed on a virtual substrate portion of the substrate, and wherein the virtual substrate is a relaxed InGaAs or relaxed GeSn virtual substrate. 13. The TFET of claim 11, wherein the lower Ge portion of the vertical nanowire is disposed on a compressively strained GeSn layer. 14. The TFET of claim 11, wherein the lower Ge portion of the vertical nanowire is disposed on a larger Ge region disposed on a virtual substrate portion of the substrate, and wherein the virtual substrate is a relaxed GeSn virtual substrate. 15. The TFET of claim 14, wherein the GeSn virtual substrate comprises approximately 14% Sn, and wherein the upper GeSn portion of the vertical nanowire is compressively strained and comprises approximately 28% Sn. 16. The TFET of claim 11, wherein the lower Ge portion has tensile strain. 17. The TFET of claim 16, wherein, from a top-down perspective, the vertical nanowire has an approximately square geometry, and wherein the tensile strain is a biaxial tensile strain. 18. The TFET of claim 11, wherein the lower Ge portion has a vertical dimension approximately in the range of 2 - 4 nanometers. 19. A tunneling field effect transistor (TFET), comprising: a hetero-junction active region disposed above a substrate, the hetero-junction active region comprising: a vertical nanowire comprising a lower tensile strained Ge1-ySny portion and an upper Ge1-xSnx portion and having an undoped channel region in only the Ge1-xSnx portion, where x > y; a doped source region disposed in the Ge1-ySny portion of the vertical nanowire, below the channel region; and a doped drain region disposed in the Ge1-xSnx portion of the vertical nanowire, above the channel region; and a gate stack disposed surrounding the channel region, between the source and drain regions, the gate stack comprising a gate dielectric portion and gate electrode portion. 20. The TFET of claim 19, wherein the lower tensile strained Ge1-ySny portion of the vertical nanowire is disposed on a virtual substrate portion of the substrate, and wherein the virtual substrate is a relaxed InGaAs or relaxed GeSn virtual substrate.
TUNNELING FIELD EFFECT TRANSISTORS (TFETS) FOR CMOS ARCHITECTURES AND APPROACHES TO FABRICATING N-TYPE AND P-TYPE TFETS

TECHNICAL FIELD

Embodiments of the invention are in the field of semiconductor devices and, in particular, tunneling field effect transistors (TFETs) for CMOS architectures and approaches to fabricating N-type and P-type TFETs.

BACKGROUND

For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory devices on a chip, leading to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.

In the manufacture of integrated circuit devices, multi-gate transistors, such as tri-gate transistors, have become more prevalent as device dimensions continue to scale down. In conventional processes, tri-gate transistors are generally fabricated on either bulk silicon substrates or silicon-on-insulator substrates. In some instances, bulk silicon substrates are preferred due to their lower cost and because they enable a less complicated tri-gate fabrication process. On bulk silicon substrates, however, the fabrication process for tri-gate transistors often encounters problems when aligning the bottom of the metal gate electrode with the source and drain extension tips at the bottom of the transistor body (i.e., the "fin"). When the tri-gate transistor is formed on a bulk substrate, proper alignment is needed for optimal gate control and to reduce short-channel effects. For instance, if the source and drain extension tips are deeper than the metal gate electrode, punch-through may occur. Alternatively, if the metal gate electrode is deeper than the source and drain extension tips, the result may be unwanted gate capacitance parasitics. Many different techniques have been attempted to reduce junction leakage of transistors. However, significant improvements are still needed in the area of junction leakage suppression.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 illustrates angled views of (a) a portion of a homojunction TFET device having an unstrained Ge or GeSn narrow body, in accordance with an embodiment of the present invention, and (c) a portion of a hetero-junction TFET device having an unstrained narrow source/channel junction, in accordance with an embodiment of the present invention. In (b), the leading band edges are shown for a relaxed 5 nm Ge double gate device, corresponding to (a). The leading edges for the band alignment for the structure of (c) are shown in (d).

Figure 2A illustrates an angled view of a portion of a planar biaxial tensile strained Ge or GeSn homojunction TFET device, in accordance with an embodiment of the present invention.

Figure 2B illustrates an angled and partially cross-sectioned view of a portion of a suspended nanowire or nanoribbon Ge or GeSn homojunction based TFET device, in accordance with an embodiment of the present invention.

Figure 2C illustrates an angled view of a portion of a tri-gate or finfet Ge homojunction based TFET device, in accordance with an embodiment of the present invention.
Figure 3A illustrates an angled view of a portion of a vertical TFET device having a tensile strained Ge region, in accordance with an embodiment of the present invention.

Figure 3B illustrates an angled view of a portion of another vertical TFET device having a tensile strained Ge region, in accordance with an embodiment of the present invention.

Figure 3C illustrates an angled view of a portion of yet another vertical TFET device having a tensile strained Ge region, in accordance with an embodiment of the present invention.

Figure 4 illustrates an angled view of a portion of a vertical TFET device having a tensile strained Ge1-ySny region, in accordance with an embodiment of the present invention.

Figure 5 is a band energy diagram 500 for bulk relaxed Ge at a temperature of approximately 300K, in accordance with an embodiment of the present invention.

Figure 6 is a Table of electron masses along different confinement orientations for a finfet device for four L-valleys, in accordance with an embodiment of the present invention.

Figure 7 is a plot of simulated drain current (ID) as a function of gate voltage (VG) for N- and P-type unstrained Ge devices, in accordance with an embodiment of the present invention.

Figure 8 is a plot of simulated energy (meV) as a function of biaxial stress (MPa) for bulk Ge devices, in accordance with an embodiment of the present invention.

Figure 9A is a plot of simulated drain current (ID) as a function of gate voltage (VG) for N- and P-type strained and unstrained Ge devices, in accordance with an embodiment of the present invention.

Figure 9B is a plot of simulated drain current (ID) as a function of gate voltage (VG) in P-type strained Ge or III-V material devices, in accordance with an embodiment of the present invention.

Figure 10A is a plot 1000A showing the direct and indirect band gap in GeSn versus Sn content calculated using the Jaros' band offset theory, in accordance with an embodiment of the present invention.

Figure 10B is a plot 1000B depicting the transition for a Ge1-x-ySixSny ternary alloy, in accordance with an embodiment of the present invention.

Figure 11A is a plot depicting stress simulation of the structure shown in Figure 3A for varying wire dimensions, in accordance with an embodiment of the present invention.

Figure 11B is a plot depicting stress simulation of the structure shown in Figure 3B, in accordance with an embodiment of the present invention.

Figure 11C is a plot depicting stress simulation of the structure shown in Figure 3C, in accordance with an embodiment of the present invention.

Figure 12 illustrates a computing device in accordance with one implementation of the invention.

DESCRIPTION OF THE EMBODIMENTS

Tunneling field effect transistors (TFETs) for CMOS architectures and approaches to fabricating N-type and P-type TFETs are described. In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present invention.
Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.

One or more embodiments described herein target approaches to, and the resulting devices from, using an indirect bandgap to direct bandgap transition for complementary N-type and P-type TFET devices. In more specific embodiments, the TFET devices are fabricated from Group IV materials. The devices may have applications in logic architectures, and in lower power device architectures.

One or more embodiments are directed to achieving high performance N-type and P-type TFET devices by using indirect to direct bandgap transitions in group IV materials. Methods and structures to engineer such devices are described herein. In one embodiment, TFETs are used to achieve a steeper subthreshold slope (SS) versus a corresponding metal oxide semiconductor field effect transistor (MOSFET) with a thermal limit of approximately 60 mV/decade. Generally, embodiments described herein may be suitable for high performance or scaled transistors for logic devices having low power applications.

To provide a background context, due to the presence of direct band gaps and a wide variety of hetero-structure band alignments, group III-V material based TFETs should offer high drive current and low SS. An SS less than 60 mV/decade has been achieved for a group III-V material hetero-structure pocket N-type TFET. With further device optimization of equivalent oxide thickness (EOT), body scaling, and barrier engineering, the group III-V material N-type TFET is expected to outperform group III-V material MOSFETs at a low target VCC, e.g., a VCC of approximately 0.3V. However, the low density of conduction band states in group III-V materials may present a fundamental limitation on achieving both a low SS and high on current (ION) in P-type TFETs based on group III-V materials. Furthermore, the ION current in TFETs fabricated in or from technologically important group IV materials, such as silicon (Si), germanium (Ge), or silicon germanium (SiGe), may be limited by a larger bandgap (e.g., 1.12 eV in Si) and/or a low indirect band gap tunneling current. In Si and Ge, the top valence bands are at the gamma point, while the lowest conduction bands are at the delta point in Si and the L point in Ge. The tunneling between the conduction band and the valence band at the source/channel junction is enabled by a phonon-assisted two-step process. The process typically has a low probability, which may lead to a low ION for TFETs based on indirect bandgap materials. For example, in the best performing Si/SiGe hetero-structure TFET, the experimentally achieved ION is approximately 40 nA/micron at 1 V gate overdrive, which is approximately 25 times lower than the above described ION for group III-V material devices at 0.3V gate overdrive. A corresponding high ION for Si, Ge, or SiGe based TFETs has not yet been achieved.

Accordingly, one or more embodiments described herein target approaches to fabricating high performance N-type and P-type TFETs with low SS and high ION in the same material system. In an embodiment, band engineering of a band structure of group IV materials, and their alloys, is used to achieve an indirect bandgap-to-direct bandgap transition for enabling N-type and P-type TFET devices in the same material. The group IV materials do not suffer from the low conduction density of states.
Furthermore, with the engineered direct band gap, a high ION and low SS can be achieved in both N-type and P-type TFETs fabricated in a same material. In specific embodiments, both unstrained and strained Ge-based or GeSn-based N-type and P-type TFETs are described.

In a first aspect, one or more embodiments described herein are directed to methods of achieving an indirect-direct bandgap transition for use in TFETs. For example, in one embodiment, the wafer orientation and conduction band non-parabolicity effect is used to increase the conduction band gamma valley mass under confinement in a thin body fin field effect transistor (finfet) or nanowire Ge or germanium tin (GeSn) TFET. Such a device provides a conduction band gamma valley energy as the lowest conduction band edge to achieve the direct bandgap. In another embodiment, tensile strain in Ge, GeSn, or silicon germanium tin (SiGeSn) is used to achieve a direct bandgap. In another embodiment, alloying of Ge with Sn in relaxed GeSn or SiGeSn is used to achieve a direct bandgap. Specific embodiments of the above approaches are described below in association with Figures 5-11.

In a second aspect, one or more embodiments described herein are directed to structures for TFET devices which utilize a direct bandgap transition. For example, in one embodiment, a device is based on an unstrained Ge or GeSn narrow body homojunction TFET or an unstrained Ge or GeSn narrow source/channel junction hetero-structure TFET using finfet or nanowire/nanoribbon device geometries. The confinement leads to the indirect-to-direct bandgap transition at or below approximately 5 nm body thickness in a finfet, or in a wide rectangular nanoribbon or a square nanowire. These devices are fabricated to have (100), (010) or (001) orientations at the device surfaces. The direct bandgap material is disposed either throughout the device, or in the source/channel junction of the device. In the drain/body of the hetero-structure device, a lattice-matched direct wide bandgap material is used to minimize the off state current (IOFF) of the device.

In another embodiment, a finfet or nanowire is based on an unstrained Ge1-xSnx homojunction TFET with the Sn content x > 6%, although the requirement to have a narrow body to achieve the direct band gap may be relaxed in this case. Examples of the immediately above described devices are illustrated in Figure 1, a description of which follows.

Generally, Figure 1 illustrates angled views of (a) a portion 100A of a TFET device having an unstrained Ge or GeSn narrow body, e.g., at or less than approximately 5 nm dimension finfet or square nanowire/nanoribbon homojunction, and (c) a portion 100C of a TFET device having an unstrained narrow source/channel junction, e.g., at or less than approximately 5 nm dimension. The direct bandgap material is either throughout the device in (a), or in the source/channel junction in (c). In (b) of Figure 1, the leading band edges are shown for a relaxed 5 nm Ge double gate device. To achieve the direct bandgap at the largest minimum body dimension, the confinement direction in the corresponding finfet is <100> (or <010>, or <001>), and the surface orientations are (100) (or (010), or (001)) in a wire/ribbon based device. In the hetero-structure in (c), i.e., portion 100C, the lattice-matched direct wide bandgap material is used to minimize the IOFF in the drain/body of the device.
In an exemplary embodiment, an example choice for a hetero-structure in (c) is a Ge narrow source/channel junction and lattice matched relaxed III-V material GaAs or Ga0.5In0.5P in the body and in the drain. The leading edges for the band alignment for the structure of (c) are shown in (d).

More specifically, referring again to Figure 1, the portion 100A of a TFET device includes an undoped and unstrained Ge or GeSn narrow body 102 having a thickness 104. Source (Na/Nd) 106 and drain (Nd/Na) 108 regions are doped regions formed in the same Ge or GeSn material. The portion 100A may be used to fabricate narrow body homojunction Ge or GeSn N-type or P-type TFET devices. In (b) the band energy (eV) as a function of the distance x along the structure 100A is provided for a device with 5 nm body dimension.

The portion 100C of a TFET device includes an undoped and unstrained Ge or GeSn narrow body first portion 152 having a thickness 154. A lattice matched narrow body second portion 153 is also included and may be fabricated from a lattice matched Group III-V material as described above. A source (Na/Nd) region 156 is formed as a doped region of the Ge or GeSn material of 152 having a thickness 157, while a drain (Nd/Na) region 158 is formed as a doped region of the lattice matched III-V material. The portion 100C may be used to fabricate a narrow source/channel junction for Ge or GeSn N-type or P-type TFET hetero-junction-based devices. In (d) the band energy (eV) as a function of the distance x along the structure 100C is provided for a device with 5 nm body dimension.

In another example of structures for TFET devices which utilize a direct bandgap transition, in an embodiment, a TFET device is based on a planar biaxial tensile strained Ge homojunction structure, with Ge strain obtained from a Ge film grown pseudomorphically on a relaxed substrate having a larger lattice constant. In a specific embodiment, possible selections for the substrate include, but are not limited to, Ge1-xSnx and InxGa1-xAs. For example, the growth of biaxial tensile Ge and GeSn on InxGa1-xAs relaxed buffer layers may provide a suitable approach. However, in an embodiment, approximately 12.5% of Sn or approximately 30% of indium (In) is used to fabricate the direct bandgap material in an approximately 5 nm body dimension Ge-based TFET. In another embodiment, a planar biaxial tensile strained Ge1-ySny with less than approximately 6% of Sn is used in a homojunction TFET device, with Ge1-ySny strain obtained from a Ge1-ySny film grown pseudomorphically on a relaxed substrate having a larger lattice constant. In a specific such embodiment, possibilities for the substrate include, but are not limited to, Ge1-xSnx and InxGa1-xAs. Examples of the immediately above described devices are illustrated in Figure 2A, a description of which follows.

Generally, Figure 2A illustrates an angled view of a portion 200A of a planar biaxial tensile strained Ge or GeSn homojunction TFET device, in accordance with an embodiment of the present invention. In one embodiment, strain for the device is derived from a layer grown pseudomorphically on a relaxed substrate with a larger lattice constant. Possibilities for the substrate include, but are not limited to, Ge1-xSnx and InxGa1-xAs having larger lattice constants than a corresponding active layer.

More specifically, referring again to Figure 2A, the portion 200A of a TFET device includes an active layer 204A disposed on a substrate 202A.
The substrate 202A is a relaxed buffer having a lattice constant greater than the lattice constant of the active layer 204A. An undoped body 206A having a thickness 207A is disposed between a doped source (Na/Nd) region 208A and a doped drain (Nd/Na) region 210A. A gate electrode 212A and gate dielectric 214A stack is formed above the undoped body 206A. In an embodiment, the structure 200A is used to fabricate a planar Ge or GeSn N-type or P-type TFET having biaxial tensile stress.

In another example of structures for TFET devices which utilize a direct bandgap transition, in an embodiment, a TFET device is based on a suspended nanowire or nanoribbon Ge homojunction. In a specific embodiment, a TFET device is undercut in a channel region of a planar biaxial tensile strained Ge film, with Ge strain obtained from a Ge film grown pseudomorphically on a relaxed substrate having a larger lattice constant. Possibilities for the substrate include, but are not limited to, Ge1-xSnx or InxGa1-xAs. In a specific embodiment, a concentration of approximately 12.5% Sn or 30% In is used to produce a direct bandgap material for an approximately 5 nm body dimension Ge TFET.

Generally, Figure 2B illustrates an angled and partially cross-sectioned view of a portion 200B of a suspended nanowire or nanoribbon Ge homojunction based TFET device, in accordance with an embodiment of the present invention. In one embodiment, the device is fabricated by undercutting in a channel region of a planar biaxial tensile strained Ge film. Ge strain may be obtained from a Ge film grown pseudomorphically on a relaxed substrate having a larger lattice constant. Possibilities for the substrate include, but are not limited to, Ge1-xSnx or InxGa1-xAs. In an embodiment, such a structure enables a direct bandgap due to a combined effect of confinement and stress.

More specifically, referring again to Figure 2B, the portion 200B of a TFET device includes an active layer 204B disposed on a substrate 202B. The substrate 202B is a relaxed wide buffer having a lattice constant greater than the lattice constant of the active layer 204B. The active layer 204B is undercut in region 250B to provide an undoped body 206B having a thickness 207B disposed between a doped source (Na/Nd) region 208B and a doped drain (Nd/Na) region 210B. A gate electrode 212B and gate dielectric 214B stack is formed to wrap around the undoped body 206B. In an embodiment, the structure 200B is used to fabricate a nanowire or nanoribbon Ge or GeSn N-type or P-type TFET having biaxial tensile stress.

In another example of structures for TFET devices which utilize a direct bandgap transition, in an embodiment, a TFET device is based on a tri-gate or finfet Ge homojunction. In one embodiment, the device is fabricated by cutting a layer region into a fin in a channel region of a planar biaxial tensile strained Ge film. In a specific embodiment, Ge strain is obtained from a Ge film grown pseudomorphically on a relaxed substrate having a larger lattice constant. Possibilities for the substrate include, but are not limited to, Ge1-xSnx or InxGa1-xAs. Such a structure may enable a direct bandgap due to a combined effect of confinement and uniaxial tensile stress. In one embodiment, the uniaxial tensile stress and the transport directions are along one of the principal crystal orientations of <100>, <010>, or <001>.
Generally, Figure 2C illustrates an angled view of a portion 200C of a tri-gate or finfet Ge homojunction based TFET device, in accordance with an embodiment of the present invention. In one embodiment, the device is fabricated by cutting a layer region into a fin in the channel region of a planar biaxial tensile strained Ge film. Ge strain may be derived from a Ge film grown pseudomorphically on a relaxed substrate having a larger lattice constant. In an embodiment, possible choices for the substrate include, but are not limited to, Ge1-xSnx or InxGa1-xAs.

More specifically, referring again to Figure 2C, the portion 200C of a TFET device includes a tensile strained active layer 204C disposed on a substrate 202C. The substrate 202C is a relaxed wide buffer having a lattice constant greater than the lattice constant of the active layer 204C. The active layer 204C is patterned to have a fin geometry 250C to provide an undoped body 206C having a thickness 207C disposed between a doped source (Na/Nd) region 208C and a doped drain (Nd/Na) region 210C. A gate electrode 212C and gate dielectric 214C stack is formed on the top and exposed sides of the undoped body 206C. In an embodiment, the structure 200C is used to fabricate a trigate or finfet Ge or GeSn based N-type or P-type TFET having uniaxial tensile stress. In a specific embodiment, the device has a transport direction along a crystal orientation of <100>, <010>, or <001>.

In another example of structures for TFET devices which utilize a direct band gap transition, in an embodiment, a TFET device is based on a vertical thin body with a biaxial tensile strained Ge region used as a source, or a source/channel junction. In one such embodiment, for dimension considerations, the Ge region has a vertical dimension approximately in the range of 2 - 4 nanometers. There are a number of possible approaches to achieving a high tensile strain for fabricating a direct gap source region with Ge, examples of which are described below in association with Figures 3A-3C. Although not necessarily depicted, other options for fabricating strained Ge source materials include, but are not limited to, embedding the Ge inside a relaxed GeSn or tensile strained SiGe structure.

In a first example, Figure 3A illustrates an angled view of a portion 300A of a vertical TFET device having a tensile strained Ge region, in accordance with an embodiment of the present invention. Referring to Figure 3A, the TFET device is formed above a virtual substrate 302A formed above a substrate 301A. A germanium source region 304A is included and has tensile strain. Above the germanium source region 304A is a channel region 306A and drain region 308A. In one embodiment, the channel region 306A and drain region 308A are formed from a same material, such as GeSn, as depicted in Figure 3A. In an embodiment, the virtual substrate 302A includes a relaxed layer such as but not limited to relaxed InGaAs or relaxed GeSn. The corresponding indium or tin percentage may be selected to tune the strain in the Ge layer 304A. For example, an Sn percentage of approximately 14% or an In percentage of approximately 30% may be used to provide approximately 2.5 GPa of biaxial stress in the Ge layer 304A if the Ge layer 304A is deposited as a blanket film. It is to be understood, however, that due to relaxation caused by forming a vertical wire, higher mismatches may be needed to achieve highly strained Ge in the final device.
In an embodiment, by using a square layout, as depicted in Figure 3A, the stress can be made more biaxial. Although not shown, it is to be understood that a gate stack, including a gate dielectric layer and a gate electrode layer, is formed to at least partially, if not completely, surround the channel region 306A.

In a second example, Figure 3B illustrates an angled view of a portion 300B of another vertical TFET device having a tensile strained Ge region, in accordance with an embodiment of the present invention. Referring to Figure 3B, the TFET device is formed above a strained layer 302B formed above a virtual substrate 301B. A germanium source region 304B is included and has tensile strain. Above the germanium source region 304B is a channel region 306B and drain region 308B. In one embodiment, the strained layer 302B, the channel region 306B, and the drain region 308B are formed from a same material, such as strained GeSn, as depicted in Figure 3B. In an embodiment, the virtual substrate 301B is a relaxed Ge virtual substrate. In an embodiment, the GeSn layer 302B is formed as a compressively strained layer. The Ge layer 304B is deposited as a strain-free layer and then capped with compressively strained GeSn 306B/308B. In an embodiment, upon patterning such a material stack into a wire, the elastic relaxation of the GeSn stretches the Ge (layer 304B), causing it to be tensile. Although not shown, it is to be understood that a gate stack, including a gate dielectric layer and a gate electrode layer, is formed to at least partially, if not completely, surround the channel region 306B.

In a third example, Figure 3C illustrates an angled view of a portion 300C of another vertical TFET device having a tensile strained Ge region, in accordance with an embodiment of the present invention. Referring to Figure 3C, the TFET device is formed above a virtual substrate 302C. A germanium source region 304C is included and has tensile strain. Above the germanium source region 304C is a channel region 306C and drain region 308C. In one embodiment, the channel region 306C and the drain region 308C are formed from a same material, such as strained GeSn, as depicted in Figure 3C. In an embodiment, the virtual substrate 302C is a relaxed GeSn virtual substrate, e.g., having approximately 14% Sn. The Ge layer 304C is a tensile strained Ge, while the GeSn region 306C/308C is compressively strained and has a composition of approximately 28% Sn. It is to be understood that other materials with similar lattice constants may be used instead of the GeSn virtual substrate 302C. When the structure 300C is formed into a wire, in an embodiment, the compressive GeSn 306C/308C aids in retaining tensile strain in the Ge layer 304C at the interface. Although not shown, it is to be understood that a gate stack, including a gate dielectric layer and a gate electrode layer, is formed to at least partially, if not completely, surround the channel region 306C.

In another example of structures for TFET devices which utilize a direct band gap transition, in an embodiment, a TFET device is based on a vertical thin body with a biaxial tensile strained Ge1-ySny region used as a source, or a source/channel junction. In one such embodiment, for dimensional considerations, the Ge1-ySny region has a vertical dimension approximately in the range of 2 - 4 nanometers.
There are a number of possible approaches to achieving a high tensile strain for fabricating a direct gap source region with Ge1-ySny, an example of which is described below in association with Figure 4. Figure 4 illustrates an angled view of a portion 400 of a vertical TFET device having a tensile strained Ge1-ySny region, in accordance with an embodiment of the present invention. Referring to Figure 4, the TFET device is formed above a virtual substrate 402 formed above a substrate 401. A germanium tin (GeSn) source region 404 is included and has tensile strain. Above the GeSn source region 404 is a channel region 406 and drain region 408. In one embodiment, the channel region 406 and drain region 408 are formed from a same material, such as GeSn, as depicted in Figure 4. In an embodiment, the virtual substrate 402 includes a relaxed layer such as but not limited to relaxed InGaAs or relaxed GeSn. The corresponding indium or tin percentage may be selected to tune the strain in the GeSn layer 404. Due to relaxation caused by forming a vertical wire, higher mismatches may be needed to achieve highly strained GeSn in the final device. In an embodiment, by using a square layout, as depicted in Figure 4, the stress can be made more biaxial. Although not shown, it is to be understood that a gate stack, including a gate dielectric layer and a gate electrode layer, is formed to at least partially, if not completely, surround channel region 406. In an aspect, then, approaches to achieving an indirect-to-direct band gap transition for fabricating P-type and/or N-type TFETs include the use of wafer orientation and conduction band non-parabolicity effects to increase the conduction band gamma valley mass under confinement in a thin body finfet or nanowire Ge or GeSn TFET. Such approaches provide a conduction band gamma valley energy as the lowest conduction band edge in order to realize a direct band gap. As an example, the conduction band edge at the gamma point is parabolic in zinc blende materials, but away from the band edge it exhibits non-parabolicity based on equation (1):

mΓ = mΓ0(1 + αε) (1)

where mΓ0 is the band edge gamma valley mass, α is the non-parabolicity constant, and ε is the energy relative to the band edge. Materials with smaller band gap exhibit larger non-parabolicity. The non-parabolicity constant α depends on the band gap Eg and the effective mass in the material, as shown in equation (2):

α = (1 - m*/m0)^2 / Eg (2)

For example, for germanium (Ge) the gamma point effective mass m* is 0.04 m0, the direct band gap is 0.8 eV, and the non-parabolicity constant α is 1.15 eV^-1. For L-valley edges the non-parabolicity constant is significantly smaller, at 0.3 eV^-1. In the relaxed Ge bulk band structure, the gamma valley is 0.14 eV above the L-valley, as shown in Figure 5. For such an indirect band gap material band structure, the ballistic current is vanishingly small, and the allowed tunneling processes are phonon-assisted, which have a low probability and lead to a low ON current in a relaxed thick body Ge TFET. Figure 5 is a band energy diagram 500 for bulk relaxed Ge at a temperature of approximately 300K, in accordance with an embodiment of the present invention. Referring to plot 500, the band gap is indirect in that the lowest-energy conduction bands are at the L-points, and the top valence bands are at the gamma points. The band-to-band tunneling process at the source/channel junction is a phonon-assisted two-step process with low probability, which leads to a low ION in TFETs based on indirect band gap materials. In a quantum confined structure, the energy ε corresponds to the shift of the band edge energy due to confinement.
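Plugging the values quoted above into equations (1) and (2) illustrates the mechanism numerically. The short sketch below is illustrative only; the 0.3 eV confinement shift is an assumed example value rather than a number from the disclosure, and a quantitative treatment would use the tight-binding simulations discussed below.

```python
# Illustrative evaluation of equations (1) and (2) for Ge (sketch only).
m_ratio = 0.04   # gamma-point effective mass of Ge, m*/m0 (quoted above)
Eg = 0.8         # direct (gamma) band gap of Ge in eV (quoted above)

# Equation (2): non-parabolicity constant alpha = (1 - m*/m0)^2 / Eg.
alpha_gamma = (1 - m_ratio) ** 2 / Eg
print(f"alpha(gamma) ~ {alpha_gamma:.2f} eV^-1")   # ~1.15 eV^-1, as quoted

alpha_L = 0.3    # eV^-1, L-valley non-parabolicity (quoted above)
eps = 0.3        # eV, assumed example confinement shift (not from the text)

# Equation (1): mass enhancement m = m_edge * (1 + alpha * eps).
print(f"gamma-valley mass factor: {1 + alpha_gamma * eps:.2f}")  # ~1.35
print(f"L-valley mass factor:     {1 + alpha_L * eps:.2f}")      # ~1.09
# The gamma mass grows much faster under confinement, so its band edge
# rises more slowly, allowing it to overtake the 0.14 eV gamma-L offset.
```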
With stronger confinement in narrow structures, the band energy increases and, therefore, the gamma valley mass increases with a smaller structure size. The L-valley mass increases less with stronger confinement, and the gamma valley becomes the lowest conduction band edge at a narrow structure size. To achieve the direct band gap at the largest minimum structure size, in an embodiment, an optimum wafer orientation for the confinement is used. For example, in a specific embodiment, in bulk Ge there are 8 L-valleys with heavy longitudinal mass ml = 1.56 m0 along the <111>, <11-1>, <-111>, and <1-11> directions (and along the corresponding opposite directions), and the light transverse mass mt = 0.082 m0 along perpendicular directions. The <100> confinement direction in a finfet, or (100) confinement plane in a wire, may provide the lightest mass for all L-valleys and, therefore, maximally raise the corresponding energies under confinement. Such raising of the corresponding energies under confinement may allow an indirect-to-direct transition to be achieved at the largest minimum structure size. In an exemplary embodiment, Figure 6 is a Table 600 of electron masses along different confinement orientations for a finfet device for the four L-valleys. Referring to Table 600, conduction band masses (in units of electron mass) in bulk Ge along <001>, <111>, and <1-10> confinement directions are provided for the L-valleys. The gamma valley is isotropic with a mass of 0.04 m0 in bulk Ge. With increased confinement in narrow body TFET devices, the corresponding gamma mass may increase due to the non-parabolicity effect and, at an approximately 5 nanometer body, may become the lowest conduction band edge, leading to the direct band gap in Ge. In such a situation, a direct ballistic tunneling current may provide a competitively high ION and low SS both in N-type and P-type unstrained (100) Ge TFETs, as simulated in Figure 7. Figure 7 is a plot 700 of simulated drain current (ID) as a function of gate voltage (VG) for N- and P-type Ge devices, in accordance with an embodiment of the present invention. Referring to Figure 7, simulated ballistic current in a narrow 5 nm body double-gate relaxed (100) Ge homojunction N-type or P-type TFET is plotted as a function of gate overdrive. For the simulation, Lgate = 40 nm, EOT = 1 nm, source/drain extensions are 20 nm, and source/drain dopings are 5e19 cm^-3. Relaxed Ge becomes a direct band gap material due to the narrow body confinement, leading to a competitive ON current of 1 A/ and a minimum SS of 12 mV/dec in the nTFET or 15 mV/dec in the pTFET. The simulation involves the NEGF quantum transport method and the sp3d5s*_SO tight-binding band structure model implemented in an OMEN simulator. It is to be understood that, in accordance with an embodiment of the present invention, a further increase of ION can be obtained by using the hetero-structure design with the narrow body direct band gap material in the source or in the source/channel junction. In another aspect, approaches to achieving an indirect-to-direct band gap transition for fabricating P-type and/or N-type TFETs include the use of tensile strain in Ge, GeSn, or SiGeSn to achieve the direct band gap. As an example, a tensile biaxial stress or tensile uniaxial stress along the principal crystal orientations <100>, <010>, <001> in Ge, GeSn, or SiGeSn, or a combination of these tensile stresses, may be used to achieve the direct band gap.
In an embodiment, the applied mechanical stress breaks crystal symmetries and splits band degeneracies. In deformation potential theory, the band edge shifts with applied stress are linearly proportional to the strains, with deformation potentials as proportionality coefficients. For example, in a specific embodiment, under an applied tensile biaxial strain in bulk Ge, the gamma valley becomes the lowest band edge above 2 GPa of stress, as shown in Figure 8. The corresponding band gap also narrows with stress. Figure 8 is a plot 800 of simulated energy (meV) as a function of biaxial stress (MPa) for bulk Ge devices, in accordance with an embodiment of the present invention. Referring to plot 800, band gap narrowing and the corresponding energy difference between the conduction band gamma valley edge and the closest conduction band edge of the other valleys are shown as a function of the applied biaxial stress in bulk Ge. The calibrated model used applies the deformation potential theory of Bir and Pikus. In a specific embodiment, above approximately 2 GPa of tensile biaxial stress, Ge becomes direct and can be used to enhance performance of N-type and P-type TFETs. The above described approach involves use of tensile stress to achieve a direct band gap material in Ge, GeSn, or SiGeSn in order to engineer high ION and low SS in group IV materials. For example, in an embodiment, under an application of a 2.5 GPa tensile biaxial stress in narrow 5 nm body homojunction Ge N-type and/or P-type TFETs, the ION at VG = VCC is increased by greater than approximately 5x in both N-type and P-type Ge TFETs, as shown simulated in Figure 9A. In one such embodiment, approximately twice the amount of uniaxial tensile stress is needed to achieve the direct band gap in Ge. However, less hydrostatic tensile stress may be needed to achieve the direct band gap in Ge. The Ge P-type TFET with the direct band gap due to the combined effect of confinement and stress shows an advantage of approximately 3x lower SS than the simulated 5 nm body III-V material P-type TFET, as depicted in Figure 9B. Figure 9A is a plot 900A of simulated drain current (ID) as a function of gate voltage (VG) for N- and P-type Ge devices, in accordance with an embodiment of the present invention. Referring to plot 900A, simulated ballistic drain current is shown for a narrow (100) 5 nm body double-gate Ge homojunction N-type or P-type TFET, relaxed and under 2.5 GPa tensile biaxial strain, as a function of gate overdrive. For the simulation, Lgate = 40 nm, EOT = 1 nm, source/drain extensions are 20 nm, and source/drain dopings are 5e19 cm^-3. Strained Ge is a direct band gap material, leading to ON current gains of greater than approximately 5x over the relaxed material at VG = VCC, while maintaining a low minimum SS of 19 mV/dec in the N-type TFET and 15 mV/dec in the P-type TFET. In an embodiment, a further increase of ION can be achieved by using the hetero-structure design with the narrow body strained direct band gap material in the source. Figure 9B is a plot 900B of simulated drain current (ID) as a function of gate voltage (VG) for P-type Ge or III-V material devices, in accordance with an embodiment of the present invention. Referring to plot 900B, simulated ballistic drain current is shown for the narrow (100) 5 nm body double-gate, under 2.5 GPa tensile biaxial strain, for a Ge homojunction P-type TFET and for a hetero-junction In0.53Ga0.47As P-type TFET with a 4 nm InAs pocket at the source, as a function of gate overdrive.
For the simulation, Lgate = 40 nm, EOT = 1 nm, source/drain extensions are 20 nm, and source/drain dopings are 5e19 cm^-3. As depicted in Figure 9B, and in accordance with an embodiment of the present invention, the Ge-based P-type TFET shows approximately 3x lowering of SS as compared with the III-V material-based P-type TFET. In the above described approach to achieving a direct band gap in TFETs, a tensile stress in the finfet or a nanowire is used. The tensile stress effect can be combined with a narrow body confinement effect to maximize the TFET performance. Such an approach can be implemented in planar biaxially strained Ge, GeSn, or SiGeSn pseudomorphic films, in a narrow body Ge homojunction TFET, or in narrow body Ge source - GeSn hetero-structures. In one such embodiment, indirect-to-direct band gap transitions due to applied tensile stress in GeSn for Sn content less than approximately 6% can be used. In another aspect, approaches to achieving an indirect-to-direct band gap transition for fabricating P-type and/or N-type TFETs include the use of alloying of Ge with Sn in relaxed GeSn or SiGeSn to achieve a direct band gap. In an example, it is to be understood that Ge is an indirect band gap material, while Sn is a metal. Upon alloying Ge with Sn, the resulting GeSn undergoes an indirect-to-direct band gap transition for Sn concentrations above approximately 6%-10%. In accordance with an embodiment of the present invention, the direct and indirect band gaps in GeSn versus Sn content, calculated using the Jaros' band offset theory, are shown in Figure 10A. The transition for the Ge1-x-ySixSny ternary alloy is shown in Figure 10B. Referring to Figures 10A and 10B, the band gaps of Ge1-zSnz at the L, gamma, and X conduction band valleys versus Sn composition z show the indirect-to-direct band gap transition above 6% of Sn. The lowest (either direct or indirect) band gap of relaxed Ge1-x-ySixSny alloys may be calculated by an empirical pseudopotential method. For such an approach, the alloy GeSn or SiGeSn is used to provide a direct band gap in TFETs. The alloy effect may be combined with a narrow body confinement effect and a tensile stress effect to maximize the TFET performance. The approach may be implemented in relaxed GeSn or SiGeSn films in a narrow body homojunction TFET, or in narrow body GeSn/SiGeSn source - GeSn/Ge/SiGe hetero-structures. In another aspect, approaches are provided to achieve stress in TFET devices which utilize direct band gap transitions under an applied stress. As an example, Figure 11A is a plot 1100A depicting stress simulation of the structure shown in Figure 3A for varying wire dimensions, in accordance with an embodiment of the present invention. Referring to plot 1100A, the two in-plane components of the stress are plotted for the case where the deposited Ge film has a 2% mismatch strain with the virtual substrate. For smaller size wires, a mismatch greater than 2% would be needed to achieve greater than approximately 2.5 GPa of stress in the wires. In another example, Figure 11B is a plot 1100B depicting stress simulation of the structure shown in Figure 3B, in accordance with an embodiment of the present invention. Referring to plot 1100B, the compressively strained GeSn layers cause the Ge to be stretched out as they elastically relax, causing tensile Ge. In this case, the GeSn has 2% compressive strain to begin with, as grown on a virtual substrate. The two in-plane stresses (in dynes/cm^2) are shown for the Ge layers only.
It is to be understood that higher stresses may be achieved by increasing the mismatch to the virtual substrate. One option may be to use relaxed SiGe virtual substrates instead of Ge. Such an approach may be needed for the smallest wire dimensions. In another example, Figure 11C is a plot 1100C depicting stress simulation of the structure shown in Figure 3C, in accordance with an embodiment of the present invention. Referring to plot 1100C, this approach results in large tensile stresses of greater than approximately 2.5 GPa at the source/channel interface. This may allow the use of lower Sn concentrations in the GeSn layers. In the above described embodiments, whether formed on virtual substrate layers or on bulk substrates, an underlying substrate used for TFET device manufacture may be composed of a semiconductor material that can withstand a manufacturing process. In an embodiment, the substrate is a bulk substrate, such as a P-type silicon substrate as is commonly used in the semiconductor industry. In an embodiment, the substrate is composed of a crystalline silicon, silicon/germanium or germanium layer doped with a charge carrier, such as but not limited to phosphorus, arsenic, boron or a combination thereof. In one embodiment, the concentration of silicon atoms in the substrate is greater than 97% or, alternatively, the concentration of dopant atoms is less than 1%. In another embodiment, the substrate is composed of an epitaxial layer grown atop a distinct crystalline substrate, e.g., a silicon epitaxial layer grown atop a boron-doped bulk silicon mono-crystalline substrate. The substrate may instead include an insulating layer disposed in between a bulk crystal substrate and an epitaxial layer to form, for example, a silicon-on-insulator substrate. In an embodiment, the insulating layer is composed of a material such as, but not limited to, silicon dioxide, silicon nitride, silicon oxy-nitride or a high-k dielectric layer. The substrate may alternatively be composed of a group III-V material. In an embodiment, the substrate is composed of a III-V material such as, but not limited to, gallium nitride, gallium phosphide, gallium arsenide, indium phosphide, indium antimonide, indium gallium arsenide, aluminum gallium arsenide, indium gallium phosphide, or a combination thereof. In another embodiment, the substrate is composed of a III-V material and charge-carrier dopant impurity atoms such as, but not limited to, carbon, silicon, germanium, oxygen, sulfur, selenium or tellurium. In the above embodiments, TFET devices include source and drain regions that may be doped with charge carrier impurity atoms. In an embodiment, the group IV material source and/or drain regions include N-type dopants such as, but not limited to, phosphorus or arsenic. In another embodiment, the group IV material source and/or drain regions include P-type dopants such as, but not limited to, boron. In the above embodiments, although not always shown, it is to be understood that the TFETs would further include gate stacks. The gate stacks include a gate dielectric layer and a gate electrode layer. In an embodiment, the gate electrode of the gate electrode stack is composed of a metal gate and the gate dielectric layer is composed of a high-k material.
For example, in one embodiment, the gate dielectric layer is composed of a material such as, but not limited to, hafnium oxide, hafnium oxy-nitride, hafnium silicate, lanthanum oxide, zirconium oxide, zirconium silicate, tantalum oxide, barium strontium titanate, barium titanate, strontium titanate, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, lead zinc niobate, or a combination thereof. Furthermore, a portion of the gate dielectric layer may include a layer of native oxide formed from the top few layers of the corresponding channel region. In an embodiment, the gate dielectric layer is composed of a top high-k portion and a lower portion composed of an oxide of a semiconductor material. In one embodiment, the gate dielectric layer is composed of a top portion of hafnium oxide and a bottom portion of silicon dioxide or silicon oxy-nitride. In an embodiment, the gate electrode is composed of a metal layer such as, but not limited to, metal nitrides, metal carbides, metal silicides, metal aluminides, hafnium, zirconium, titanium, tantalum, aluminum, ruthenium, palladium, platinum, cobalt, nickel or conductive metal oxides. In a specific embodiment, the gate electrode is composed of a non-workfunction-setting fill material formed above a metal workfunction-setting layer. In an embodiment, the gate electrode is composed of a P-type or N-type material. The gate electrode stack may also include dielectric spacers. The TFET semiconductor devices described above cover both planar and non-planar devices, including gate-all-around devices. Thus, more generally, the semiconductor devices may be a semiconductor device incorporating a gate, a channel region and a pair of source/drain regions. In an embodiment, the semiconductor device is one such as, but not limited to, a MOS-FET. In one embodiment, the semiconductor device is a planar or three-dimensional MOS-FET and is an isolated device or is one device in a plurality of nested devices. As will be appreciated for a typical integrated circuit, both N- and P-channel transistors may be fabricated on a single substrate to form a CMOS integrated circuit. Furthermore, additional interconnect wiring may be fabricated in order to integrate such devices into an integrated circuit. Generally, one or more embodiments described herein are targeted at tunneling field effect transistors (TFETs) for CMOS architectures and approaches to fabricating N-type and P-type TFETs. Group IV active layers for such devices may be formed by techniques such as, but not limited to, chemical vapor deposition (CVD) or molecular beam epitaxy (MBE), or other like processes. Figure 12 illustrates a computing device 1200 in accordance with one implementation of the invention. The computing device 1200 houses a board 1202. The board 1202 may include a number of components, including but not limited to a processor 1204 and at least one communication chip 1206. The processor 1204 is physically and electrically coupled to the board 1202. In some implementations, the at least one communication chip 1206 is also physically and electrically coupled to the board 1202. In further implementations, the communication chip 1206 is part of the processor 1204. Depending on its applications, computing device 1200 may include other components that may or may not be physically and electrically coupled to the board 1202.
These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). The communication chip 1206 enables wireless communications for the transfer of data to and from the computing device 1200. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1206 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 1200 may include a plurality of communication chips 1206. For instance, a first communication chip 1206 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1206 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others. The processor 1204 of the computing device 1200 includes an integrated circuit die packaged within the processor 1204. In some implementations of the invention, the integrated circuit die of the processor includes one or more devices, such as tunneling field effect transistors (TFETs) built in accordance with implementations of the invention. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 1206 also includes an integrated circuit die packaged within the communication chip 1206. In accordance with another implementation of the invention, the integrated circuit die of the communication chip includes one or more devices, such as tunneling field effect transistors (TFETs) built in accordance with implementations of the invention. In further implementations, another component housed within the computing device 1200 may contain an integrated circuit die that includes one or more devices, such as tunneling field effect transistors (TFETs) built in accordance with implementations of the invention. In various implementations, the computing device 1200 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 1200 may be any other electronic device that processes data.
Thus, embodiments of the present invention include tunneling field effect transistors (TFETs) for CMOS architectures and approaches to fabricating N-type and P-type TFETs. In an embodiment, a tunneling field effect transistor (TFET) includes a homojunction active region disposed above a substrate. The homojunction active region includes a relaxed Ge or GeSn body having an undoped channel region therein. The homojunction active region also includes doped source and drain regions disposed in the relaxed Ge or GeSn body, on either side of the channel region. The TFET also includes a gate stack disposed on the channel region, between the source and drain regions. The gate stack includes a gate dielectric portion and a gate electrode portion. In one embodiment, the relaxed Ge or GeSn body is a direct band gap body and has a thickness of, or less than, approximately 5 nanometers. In one embodiment, the TFET is a finfet, trigate or square nanowire-based device. In one embodiment, the doped source and drain regions include N-type dopants and the TFET is an N-type device. In one embodiment, the doped source and drain regions include P-type dopants and the TFET is a P-type device. In an embodiment, a tunneling field effect transistor (TFET) includes a hetero-junction active region disposed above a substrate. The hetero-junction active region includes a relaxed body having a Ge or GeSn portion and a lattice matched Group III-V material portion and having an undoped channel region in both the Ge or GeSn portion and the lattice matched Group III-V material portion. A doped source region is disposed in the Ge or GeSn portion of the relaxed body, on a first side of the channel region. A doped drain region is disposed in the Group III-V material portion of the relaxed body, on a second side of the channel region. The TFET also includes a gate stack disposed on the channel region, between the source and drain regions. The gate stack includes a gate dielectric portion and a gate electrode portion. In one embodiment, the Ge or GeSn portion of the relaxed body is a Ge portion, and the lattice matched Group III-V material portion is a GaAs or Ga0.5In0.5P portion. In one embodiment, the relaxed body is a direct band gap body and has a thickness of, or less than, approximately 5 nanometers. In one embodiment, the TFET is a finfet, trigate or square nanowire-based device. In one embodiment, the doped source and drain regions include N-type dopants and the TFET is an N-type device. In one embodiment, the doped source and drain regions include P-type dopants and the TFET is a P-type device. In an embodiment, a tunneling field effect transistor (TFET) includes a homojunction active region disposed above a relaxed substrate. The homojunction active region includes a biaxially tensile strained Ge or Ge1-ySny body having an undoped channel region therein. Doped source and drain regions are disposed in the biaxially tensile strained Ge or Ge1-ySny body, on either side of the channel region. The TFET also includes a gate stack disposed on the channel region, between the source and drain regions. The gate stack includes a gate dielectric portion and a gate electrode portion. In one embodiment, the relaxed substrate is a Ge1-xSnx (x > y) or InxGa1-xAs substrate. In one embodiment, the biaxially tensile strained Ge or Ge1-ySny body is a direct band gap body and has a thickness of, or less than, approximately 5 nanometers. In one embodiment, the TFET is a planar, finfet, trigate or square nanowire-based device.
In one embodiment, the TFET is a finfet or trigate device, with a strained Ge or Ge1-ySny body with uniaxial tensile stress along a crystal orientation of <100>, <010> or <001>. In one embodiment, the doped source and drain regions include N-type dopants and the TFET is an N-type device. In one embodiment, the doped source and drain regions include P-type dopants and the TFET is a P-type device. In an embodiment, a tunneling field effect transistor (TFET) includes a hetero-junction active region disposed above a substrate. The hetero-junction active region includes a vertical nanowire having a lower Ge portion and an upper GeSn portion and having an undoped channel region in only the GeSn portion. A doped source region is disposed in the Ge portion of the vertical nanowire, below the channel region. A doped drain region is disposed in the GeSn portion of the vertical nanowire, above the channel region. The TFET also includes a gate stack disposed surrounding the channel region, between the source and drain regions. The gate stack includes a gate dielectric portion and a gate electrode portion. In one embodiment, the lower Ge portion of the vertical nanowire is disposed on a virtual substrate portion of the substrate, and the virtual substrate is a relaxed InGaAs or relaxed GeSn virtual substrate. In one embodiment, the lower Ge portion of the vertical nanowire is disposed on a compressively strained GeSn layer. In one embodiment, the lower Ge portion of the vertical nanowire is disposed on a larger Ge region disposed on a virtual substrate portion of the substrate, and the virtual substrate is a relaxed GeSn virtual substrate. In one embodiment, the GeSn virtual substrate is composed of approximately 14% Sn, and the upper GeSn portion of the vertical nanowire is compressively strained and is composed of approximately 28% Sn. In one embodiment, the lower Ge portion has tensile strain. In one embodiment, from a top-down perspective, the vertical nanowire has an approximately square geometry, and the tensile strain is a biaxial tensile strain. In one embodiment, the lower Ge portion has a vertical dimension approximately in the range of 2 - 4 nanometers. In one embodiment, the doped source and drain regions include N-type dopants and the TFET is an N-type device. In one embodiment, the doped source and drain regions include P-type dopants and the TFET is a P-type device. In an embodiment, a tunneling field effect transistor (TFET) includes a hetero-junction active region disposed above a substrate. The hetero-junction active region includes a vertical nanowire having a lower tensile strained Ge1-ySny portion and an upper Ge1-xSnx portion and having an undoped channel region in only the Ge1-xSnx portion, where x > y. A doped source region is disposed in the Ge1-ySny portion of the vertical nanowire, below the channel region. A doped drain region is disposed in the Ge1-xSnx portion of the vertical nanowire, above the channel region. A gate stack is disposed surrounding the channel region, between the source and drain regions. The gate stack includes a gate dielectric portion and a gate electrode portion. In one embodiment, the lower tensile strained Ge1-ySny portion of the vertical nanowire is disposed on a virtual substrate portion of the substrate, and the virtual substrate is a relaxed InGaAs or relaxed GeSn virtual substrate. |
The invention discloses a system architecture for cloud gaming. Described herein is a cloud-based gaming system in which graphics processing operations of a cloud-based game can be performed on a client device. Client-based graphics processing can be enabled in response to a determination that the client includes a graphics processor having a performance that exceeds a minimum threshold. When a game is remotely executed and streamed to a client, the client is configurable to provide network feedback that can be used to adjust execution and/or encoding for the game. |
1. A method comprising: mapping, via an encapsulation layer, an application for execution by a processing resource selected from a set of processing resources, the set of processing resources including processing resources of a server device of a cloud gaming system and processing resources of a client device of the cloud gaming system; executing the application, via the encapsulation layer, on the processing resource mapped via the encapsulation layer; and streaming output of the execution of the application to a client application of the cloud gaming system.

2. The method of claim 1, further comprising: importing the application into storage associated with the cloud gaming system.

3. The method of claim 1, further comprising: encapsulating the application in the encapsulation layer, wherein the encapsulation layer is configurable to enable selective execution of the application by the server device of the cloud gaming system and the client device of the cloud gaming system.

4. The method of claim 1, wherein the encapsulated application includes core logic and a plurality of encapsulations associated with the encapsulation layer, and wherein the encapsulation layer is configured to selectively relay the API commands made by the core logic.

5. The method of claim 4, wherein the plurality of encapsulations associated with the encapsulation layer includes: a file system encapsulation, an input device encapsulation, a graphics programming interface encapsulation, an audio device encapsulation, and a system interface encapsulation.

6. The method of claim 1, wherein mapping the application via the encapsulation layer comprises: mapping an encapsulation to a resource selected from a resource set, the resource set including resources of a host device or resources of a remote device.

7. The method of claim 1, further comprising: mapping the encapsulation layer of a first instance of the application for execution by a client of the cloud gaming system; initiating transmission of data associated with the first instance of the application to the client of the cloud gaming system; mapping the encapsulation layer of a second instance of the application for execution by a server of the cloud gaming system; and initiating execution of the second instance of the application on the server of the cloud gaming system.

8. The method of claim 7, wherein streaming the output of the execution of the application to the client application of the cloud gaming system comprises: streaming output of the second instance of the application during the transmission of the data associated with the first instance of the application.
9. The method of claim 7, further comprising: providing network feedback to the server of the cloud gaming system during the execution of the second instance of the application.

10. The method of claim 7, further comprising: after completing the transmission of the data associated with the first instance of the application, initiating execution of the first instance of the application on the client of the cloud gaming system; and streaming the execution of the first instance of the application to the client application of the cloud gaming system.

11. The method of claim 10, wherein the first instance of the application is executed on a first client of the cloud gaming system, and the client application of the cloud gaming system is executed on a second client of the cloud gaming system.

12. A non-transitory machine-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform the method of any one of claims 1-11.

13. A data processing system comprising means for performing the method of any one of claims 1-11.

14. An apparatus comprising means for performing the method of any one of claims 1-11. |
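As a purely illustrative sketch of the selective mapping recited in claims 1 and 4-6, and not part of the claims themselves, the encapsulation layer can be viewed as a set of encapsulations whose backends are chosen per resource; all class, function, and backend names below are hypothetical.

```python
# Hypothetical sketch of the encapsulation layer of claims 1 and 4-6.
# Each encapsulation (file system, input device, graphics programming
# interface, audio device, system interface) is mapped to either a host
# (client) backend or a remote (server) backend.

from dataclasses import dataclass
from typing import Callable, Dict

ENCAPSULATIONS = ("filesystem", "input", "graphics_api", "audio", "system")

@dataclass
class Backend:
    name: str                      # e.g. "client-gpu" or "server-store"
    relay: Callable[[str], None]   # relays an API command to the resource

class EncapsulationLayer:
    """Selectively relays API commands made by the game's core logic."""
    def __init__(self) -> None:
        self.mapping: Dict[str, Backend] = {}

    def map(self, encapsulation: str, backend: Backend) -> None:
        assert encapsulation in ENCAPSULATIONS
        self.mapping[encapsulation] = backend

    def call(self, encapsulation: str, api_command: str) -> None:
        # Core logic is unaware of whether the resource is local or remote.
        self.mapping[encapsulation].relay(api_command)

# Example: render on a capable client while keeping saves on the server.
layer = EncapsulationLayer()
layer.map("graphics_api", Backend("client-gpu", lambda c: print("local:", c)))
layer.map("filesystem", Backend("server-store", lambda c: print("remote:", c)))
layer.call("graphics_api", "draw_frame()")
layer.call("filesystem", "save_game(slot=1)")
```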
System architecture for cloud gaming

CROSS-REFERENCE

This application claims priority to U.S. Provisional Application No. 62/972,180 and U.S. Provisional Application No. 62/972,197, each filed on February 10, 2020, the entire contents of which are incorporated herein by reference. This application further claims priority to U.S. Provisional Application No. 63/064,141, filed on August 11, 2020, which is incorporated herein by reference.

BACKGROUND

Cloud-based gaming systems enable potentially graphics-intensive 3D gaming applications to be experienced across a variety of devices, including devices with limited graphics processing capabilities. The game application can be executed on one or more server devices. The input received at the client device is transmitted to the server device and provided to the game application being executed. Responses to these inputs are then returned to the client device. The responses can be provided in the form of a stream of encoded video frames that are decoded by the client device for display. Although current cloud gaming systems based on video streaming enable games to be experienced on various client devices, client devices with powerful graphics processing capabilities may not be fully utilized.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the features of the embodiments set forth above can be understood in detail, a more particular description of the embodiments briefly summarized above may be had by reference to embodiments, some of which are illustrated in the accompanying drawings.

Fig. 1 is a block diagram of a processing system according to an embodiment;

Figures 2A-2D illustrate computing systems and graphics processors provided by the embodiments described herein;

Figures 3A-3C illustrate block diagrams of additional graphics processor and compute accelerator architectures provided by the embodiments described herein;

Figure 4 is a block diagram of a graphics processing engine of a graphics processor according to some embodiments;

Figures 5A-5B illustrate thread execution logic according to embodiments described herein, the thread execution logic including an array of processing elements employed in a graphics processor core;

Figure 6 illustrates an additional execution unit according to an embodiment;

Figure 7 is a block diagram illustrating a graphics processor instruction format according to some embodiments;

Figure 8 is a block diagram of a graphics processor according to another embodiment;

Figures 9A-9B illustrate graphics processor command formats and command sequences according to some embodiments;

Figure 10 illustrates an exemplary graphics software architecture for a data processing system according to some embodiments;

Figure 11A is a block diagram illustrating an IP core development system according to an embodiment;

Figure 11B illustrates a cross-sectional side view of an integrated circuit package assembly according to some embodiments described herein;

Figure 11C illustrates a package assembly including multiple units of hardware logic chiplets connected to a substrate;

Figure 11D illustrates a package assembly including interchangeable chiplets according to an embodiment;
FIG. 12 illustrates an exemplary integrated circuit that can be manufactured using one or more IP cores according to various embodiments described herein;

Figures 13A-13B illustrate exemplary graphics processors that can be manufactured using one or more IP cores according to various embodiments described herein;

Figure 14 illustrates the frame encoding and decoding used in a cloud gaming system;

Figure 15 illustrates a cloud gaming system in which game servers are distributed across multiple cloud and data center systems;

Figure 16 illustrates a cloud gaming system in which cloud-based, edge-based, or client-based computing resources can be used to perform graphics processing operations;

Figures 17A-17B illustrate a system and method for encapsulating a game application so that the game can be played on a server and/or client device;

Figure 18 illustrates an exemplary server according to an embodiment;

Figure 19 illustrates a hybrid file system that can be used to achieve a consistent gaming experience for locally executed games and remotely executed games;

Figure 20 illustrates a cloud gaming system in which command streams from multiple games can be combined into a single context;

Figure 21 illustrates a cloud gaming system for implementing GPU sharing across multiple server devices;

Figure 22 illustrates a cloud gaming system including end-to-end path optimization;

Figures 23A-23B illustrate a method for configuring local execution or remote execution of cloud-based games;

Figure 24 is a block diagram of a computing device including a graphics processor according to an embodiment.

DETAILED DESCRIPTION

Described herein is a cloud gaming system in which cloud-based, edge-based, or client-based computing resources can be used to perform graphics processing operations. If the client network environment includes a client with sufficient graphics processing resources to execute the game, the game server stack can be downloaded by the client, and the game server can be executed locally on the client. During the download of the game server stack to the client, the game can be executed by a remote server, and the rendered output can be streamed to the client.

For the purpose of explanation, numerous specific details are set forth to provide a thorough understanding of the various embodiments described below. However, it will be obvious to those skilled in the art that the embodiments can be practiced without some of these specific details. In other instances, well-known structures and devices are illustrated in block diagram form to avoid obscuring the basic principles and to provide a more thorough understanding of the embodiments. Although some of the following embodiments are described with reference to a graphics processor, the techniques and teachings described herein can be applied to various types of circuits or semiconductor devices, including general-purpose processing devices or graphics processing devices. Reference herein to "one embodiment" or "an embodiment" indicates that a particular feature, structure, or characteristic described in conjunction with or in association with the embodiment may be included in at least one of such embodiments. However, appearances of the phrase "in one embodiment" in different places in this specification do not necessarily all refer to the same embodiment.

In the following description and claims, the terms "coupled" and "connected" and their derivatives may be used. It should be understood that these terms are not intended as synonyms for each other.
"Coupled" is used to indicate that two or more elements that may or may not be in direct physical or electrical contact with each other cooperate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled to each other.In the following description, FIGS. 1 to 12 and 13A to 13B provide an overview of exemplary data processing systems and graphics processor logic covering or related to each embodiment. Figures 14-23 provide specific details of each embodiment. Some aspects of the following embodiments are described with reference to a graphics processor, while other aspects are described with reference to a general-purpose processor such as a central processing unit (CPU). Similar techniques and teachings can be applied to other types of circuits or semiconductor devices, including but not limited to one or more instances of integrated many-core processors, GPU clusters, or field programmable gate arrays (FPGAs). Generally speaking, the teachings are applicable to any processor or machine that manipulates or processes images (e.g., samples, pixels), vertex data, or geometric data.System overviewFIG. 1 is a block diagram of a processing system 100 according to an embodiment. The processing system 100 can be used in the following: a single-processor desktop system, a multi-processor workstation system, or a server system with a large number of processors 102 or processor cores 107. In one embodiment, the processing system 100 is a processing platform incorporated in a system-on-chip (SoC) integrated circuit for use in mobile devices, handheld devices, or embedded devices Use, such as for use in Internet of Things (IoT) devices with wired or wireless connectivity to a local area network or a wide area network.In one embodiment, the processing system 100 may include the following, may be coupled with, or may be incorporated in the following: server-based gaming platforms, gaming consoles including games and media consoles, mobile Game console, handheld game console, or online game console. In some embodiments, the processing system 100 is part of a mobile phone, smart phone, tablet computing device, or mobile Internet connected device (such as a notebook with low internal storage capacity). The processing system 100 may also include, be coupled with, or be integrated in the following: wearable devices, such as smart watch wearable devices; utilizing augmented reality (AR) or virtual reality (VR) features To enhance to provide visual, audio or tactile output to supplement the real-world visual, audio or tactile experience or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback smart glasses or clothing; other augmented reality (AR) equipment; or other virtual reality (VR) equipment. In some embodiments, the processing system 100 includes, or is part of, a television or set-top box device. In one embodiment, the processing system 100 may include an autonomous vehicle, coupled with, or integrated in an autonomous vehicle, such as a bus, a tractor trailer, a car, an electric motor, or electric power. Loop, airplane or glider (or any combination thereof). An autonomous vehicle may use the processing system 100 to process the environment sensed around the vehicle.In some embodiments, each of the one or more processors 102 includes one or more processor cores 107 for processor instructions that, when executed, perform operations for system or user software. 
In some embodiments, at least one of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, the instruction set 109 may facilitate complex instruction set computing (CISC), reduced instruction set computing (RISC), or computing via a very long instruction word (VLIW). One or more processor cores 107 may process different instruction sets 109, which may include instructions for facilitating the emulation of other instruction sets. The processor core 107 may also include other processing devices, such as a digital signal processor (DSP).

In some embodiments, the processor 102 includes a cache memory 104. Depending on the architecture, the processor 102 may have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among the various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a level 3 (L3) cache or last level cache (LLC)) (not shown), which may be shared among the processor cores 107 using known cache coherency techniques. A register file 106 may be additionally included in the processor 102, and may include different types of registers for storing different types of data (for example, integer registers, floating point registers, status registers, and instruction pointer registers). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.

In some embodiments, one or more processors 102 are coupled with one or more interface buses 110 to transmit communication signals, such as address, data, or control signals, between the processor 102 and other components in the processing system 100. In one embodiment, the interface bus 110 may be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, the processor bus is not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (for example, PCI, PCI express), memory buses, or other types of interface buses. In one embodiment, the processor(s) 102 include an integrated memory controller 116 and a platform controller hub 130. The memory controller 116 facilitates communication between a memory device and other components of the processing system 100, while the platform controller hub (PCH) 130 provides connections to I/O devices via a local I/O bus.

The memory device 120 may be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, a flash memory device, a phase change memory device, or some other memory device with suitable performance to serve as process memory. In one embodiment, the memory device 120 may operate as system memory for the processing system 100 to store data 122 and instructions 121 for use when the one or more processors 102 execute an application or process. The memory controller 116 is also coupled with an optional external graphics processor 118, which can communicate with the one or more graphics processors 108 in the processor 102 to perform graphics operations and media operations. In some embodiments, graphics operations, media operations, or compute operations can be assisted by an accelerator 112, which is a coprocessor that can be configured to perform a specialized set of graphics operations, media operations, or compute operations.
For example, in one embodiment, the accelerator 112 is a matrix multiplication accelerator for optimizing machine learning or compute operations. In one embodiment, the accelerator 112 is a ray tracing accelerator that can be used to perform ray tracing operations in concert with the graphics processor 108. In one embodiment, an external accelerator 119 may be used in place of the accelerator 112, or the external accelerator 119 may be used in concert with the accelerator 112.

In some embodiments, a display device 111 may be connected to the processor(s) 102. The display device 111 may be one or more of the following: an internal display device, such as in a mobile electronic device or a laptop device; or an external display device attached via a display interface (for example, DisplayPort, etc.). In one embodiment, the display device 111 may be a head-mounted display (HMD), such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.

In some embodiments, the platform controller hub 130 enables peripheral devices to be connected to the memory device 120 and the processor 102 via a high-speed I/O bus. I/O peripherals include, but are not limited to, an audio controller 146, a network controller 134, a firmware interface 128, a wireless transceiver 126, a touch sensor 125, and a data storage device 124 (for example, non-volatile memory, volatile memory, hard disk drives, flash memory, NAND, 3D NAND, 3D XPoint, etc.). The data storage device 124 may be connected via a storage interface (for example, SATA) or via a peripheral bus such as a Peripheral Component Interconnect bus (for example, PCI, PCI express). The touch sensor 125 may include a touch screen sensor, a pressure sensor, or a fingerprint sensor. The wireless transceiver 126 may be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver, such as a 3G, 4G, 5G, or Long Term Evolution (LTE) transceiver. The firmware interface 128 enables communication with system firmware, and may be, for example, a unified extensible firmware interface (UEFI). The network controller 134 may enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) is coupled with the interface bus 110. In one embodiment, the audio controller 146 is a multi-channel high-definition audio controller. In one embodiment, the processing system 100 includes an optional legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 130 may also be connected to one or more Universal Serial Bus (USB) controllers 142 to connect input devices, such as a keyboard and mouse 143 combination, a camera 144, or other USB input devices.

It will be understood that the processing system 100 shown is exemplary and not restrictive, as other types of data processing systems configured in different ways may also be used. For example, an instance of the memory controller 116 and the platform controller hub 130 may be integrated into a separate external graphics processor, such as the external graphics processor 118. In one embodiment, the platform controller hub 130 and/or the memory controller 116 may be external to the one or more processors 102.
For example, the processing system 100 may include an external memory controller 116 and platform controller hub 130, which may be configured as a memory controller hub and peripheral controller hub within a system chipset in communication with the processor(s) 102.

For example, a circuit board ("sled") may be used, on which components such as CPUs, memory, and other components are placed, designed for enhanced thermal performance. In some examples, processing components such as processors are located on a top side of the sled, and near memory such as DIMMs is located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.

A data center can utilize a single network structure ("fabric") that supports multiple other network architectures, including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted-pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high-bandwidth, low-latency interconnections and network architecture, the data center may, in use, pool resources such as memory, accelerators (for example, GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to computing resources (for example, processors) on an as-needed basis, enabling the computing resources to access the pooled resources as if those resources were local.

A power supply or power source may provide voltage and/or current to the processing system 100 or to any of the components or systems described herein. In one example, the power supply includes an AC-DC (alternating current to direct current) adapter for plugging into a wall socket. Such AC power may be a renewable energy (e.g., solar) power source. In one example, the power source includes a DC power source, such as an external AC-DC converter. In one example, the power source or power supply includes wireless charging hardware for charging through proximity to a charging field. In one example, the power source may include an internal battery, an AC supply, a motion-based power supply, a solar power supply, or a fuel cell source.

Figures 2A-2D illustrate computing systems and graphics processors provided by the embodiments described herein.
Those elements of Figures 2A-2D that have the same reference numerals (or names) as the elements of any other figures in this document can operate or function in any manner similar to that described elsewhere in this document, but are not limited to such.

Figure 2A is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. The processor 200 may include additional cores, up to and including the additional core 202N represented by the dashed boxes. Each of the processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments, each processor core also has access to one or more shared cache units 206. The internal cache units 204A-204N and the shared cache units 206 represent the cache memory hierarchy within the processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, where the highest level of cache before the external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.

In some embodiments, the processor 200 may also include a set 216 of one or more bus controller units and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more PCI buses or PCI Express buses. The system agent core 210 provides management functionality for the various processor components. In some embodiments, the system agent core 210 includes one or more integrated memory controllers 214 for managing access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multithreading. In such embodiments, the system agent core 210 includes components for coordinating and operating the cores 202A-202N during multithreaded processing. The system agent core 210 may additionally include a power control unit (PCU) that includes logic and components for regulating the power state of the processor cores 202A-202N and the graphics processor 208.

In some embodiments, the processor 200 additionally includes a graphics processor 208 for performing graphics processing operations. In some embodiments, the graphics processor 208 is coupled with the set 206 of shared cache units and with the system agent core 210, which includes the one or more integrated memory controllers 214. In some embodiments, the system agent core 210 also includes a display controller 211 for driving graphics processor output to one or more coupled displays. In some embodiments, the display controller 211 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated in the graphics processor 208.

In some embodiments, a ring-based interconnect unit 212 is used to couple the internal components of the processor 200. However, alternative interconnect units may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including those known in the art.
In some embodiments, the graphics processor 208 couples with the ring interconnect 212 via an I/O link 213.

The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on-package I/O interconnect that facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and the graphics processor 208 can use the embedded memory module 218 as a shared last level cache.

In some embodiments, the processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, the processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of the processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, the processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more power cores having lower power consumption. In one embodiment, the processor cores 202A-202N are heterogeneous in terms of computational capability. Additionally, the processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 2B is a block diagram of hardware logic of a graphics processor core 219, according to some embodiments described herein. Those elements of FIG. 2B having the same reference numbers (or names) as the elements of any other figure in this document can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The graphics processor core 219, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 219 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core 219 can include a fixed function block 230 coupled with multiple sub-cores 221A-221F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.

In some embodiments, the fixed function block 230 includes a geometry/fixed function pipeline 231 that can be shared by all sub-cores in the graphics processor core 219, for example, in lower performance and/or lower power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline 231 includes a 3D fixed function pipeline (e.g., 3D pipeline 312 as in FIG. 3A and FIG. 4, described below), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers (e.g., unified return buffer 418 in FIG. 4, described below).

In one embodiment, the fixed function block 230 also includes a graphics SoC interface 232, a graphics microcontroller 233, and a media pipeline 234. The graphics SoC interface 232 provides an interface between the graphics processor core 219 and other processor cores within a system on a chip integrated circuit.
The graphics microcontroller 233 is a programmable sub-processor that is configurable to manage various functions of the graphics processor core 219, including thread dispatch, scheduling, and pre-emption. The media pipeline 234 (e.g., the media pipeline 316 of FIG. 3A and FIG. 4) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image data and video data. The media pipeline 234 implements media operations via requests to compute or sampling logic within the sub-cores 221A-221F.

In one embodiment, the SoC interface 232 enables the graphics processor core 219 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 232 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use and/or implementation of global memory atomics that may be shared between the graphics processor core 219 and CPUs within the SoC. The SoC interface 232 can also implement power management controls for the graphics processor core 219 and enable an interface between a clock domain of the graphics processor core 219 and other clock domains within the SoC. In one embodiment, the SoC interface 232 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 234 when media operations are to be performed, or to a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 231, geometry and fixed function pipeline 237) when graphics processing operations are to be performed.

The graphics microcontroller 233 can be configured to perform various scheduling and management tasks for the graphics processor core 219. In one embodiment, the graphics microcontroller 233 can perform graphics and/or compute workload scheduling on the various graphics parallel engines within the execution unit (EU) arrays 222A-222F, 224A-224F within the sub-cores 221A-221F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics processor core 219 can submit workloads via one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete.
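To make that submission flow concrete, the sketch below models a doorbell-style submission from host software. The memory-mapped doorbell register, the WorkDescriptor layout, and the queue depth are all hypothetical, invented for illustration only; they do not describe an actual hardware interface.

    #include <atomic>
    #include <cstdint>

    // Hypothetical descriptor for a workload staged by host software.
    struct WorkDescriptor {
        uint64_t command_buffer_addr;  // GPU address of the command buffer
        uint32_t engine_id;            // which graphics engine should run it
        uint32_t length;               // command buffer length in bytes
    };

    // 'doorbell' points at a memory-mapped doorbell register; writing to it
    // invokes the graphics microcontroller's scheduling operation.
    void submit_workload(volatile uint32_t* doorbell, WorkDescriptor* queue,
                         uint32_t* tail, const WorkDescriptor& work) {
        queue[*tail % 256] = work;  // stage the descriptor (depth 256 assumed)
        std::atomic_thread_fence(std::memory_order_release);  // publish first
        *tail += 1;
        *doorbell = *tail;  // ring the doorbell to trigger scheduling
    }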
In one embodiment, the graphics microcontroller 233 can also facilitate low-power or idle states for the graphics processor core 219, providing the graphics processor core 219 with the ability to save and restore registers within the graphics processor core 219 across low-power state transitions, independently of the operating system and/or graphics driver software on the system.

The graphics processor core 219 may have more or fewer than the illustrated sub-cores 221A-221F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core 219 can also include shared function logic 235, shared and/or cache memory 236, a geometry/fixed function pipeline 237, and additional fixed function logic 238 to accelerate various graphics and compute processing operations. The shared function logic 235 can include logic units associated with the shared function logic 420 of FIG. 4 (e.g., sampler logic, math logic, and/or inter-thread communication logic) that can be shared by each set of N sub-cores within the graphics processor core 219. The shared and/or cache memory 236 can be a last level cache for the set of N sub-cores 221A-221F within the graphics processor core 219 and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 237 can be included instead of the geometry/fixed function pipeline 231 within the fixed function block 230 and can include the same or similar logic units.

In one embodiment, the graphics processor core 219 includes additional fixed function logic 238 that can include various fixed function acceleration logic for use by the graphics processor core 219. In one embodiment, the additional fixed function logic 238 includes an additional geometry pipeline for use in position-only shading. In position-only shading, two geometry pipelines exist: the full geometry pipeline within the geometry/fixed function pipeline 231, and a cull pipeline, which is an additional geometry pipeline that may be included within the additional fixed function logic 238. In one embodiment, the cull pipeline is a trimmed-down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example, and in one embodiment, the cull pipeline logic within the additional fixed function logic 238 can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attributes of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all triangles, regardless of whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles, shading only the visible triangles that are finally passed to the rasterization phase.
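As an illustration of the kind of per-triangle result a cull pipeline might record, the sketch below performs a back-facing test on already-shaded clip-space positions. This is a minimal sketch of one plausible visibility criterion, not a description of the actual cull pipeline logic; clipping against the view volume is omitted, and the struct name is invented for the example.

    struct Vec4 { float x, y, z, w; };  // clip-space vertex position

    // One bit of visibility information per triangle: a triangle whose
    // projected signed area is non-positive is treated as back-facing and
    // can be skipped by the replay pipeline.
    bool triangle_visible(const Vec4& a, const Vec4& b, const Vec4& c) {
        float ax = a.x / a.w, ay = a.y / a.w;  // perspective divide
        float bx = b.x / b.w, by = b.y / b.w;
        float cx = c.x / c.w, cy = c.y / c.w;
        float signed_area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
        return signed_area > 0.0f;
    }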
In one embodiment, the additional fixed function logic 238 can also include machine learning acceleration logic, such as fixed function matrix multiplication logic, for implementations that include optimizations for machine learning training or inferencing.

Each graphics sub-core 221A-221F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipelines, media pipelines, or shader programs. The graphics sub-cores 221A-221F include multiple EU arrays 222A-222F, 224A-224F; thread dispatch and inter-thread communication (TD/IC) logic 223A-223F; 3D (e.g., texture) samplers 225A-225F; media samplers 206A-206F; shader processors 227A-227F; and shared local memory (SLM) 228A-228F. The EU arrays 222A-222F, 224A-224F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader/GPGPU programs. The TD/IC logic 223A-223F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D samplers 225A-225F can read texture or other 3D graphics-related data into memory. The 3D samplers can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media samplers 206A-206F can perform similar read operations based on the type and format associated with the media data. In one embodiment, each graphics sub-core 221A-221F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores 221A-221F can make use of the shared local memory 228A-228F within each sub-core, enabling threads executing within a thread group to execute using a common pool of on-chip memory.

FIG. 2C illustrates a graphics processing unit (GPU) 239 that includes dedicated sets of graphics processing resources arranged into multi-core groups 240A-240N. Details are shown for the multi-core group 240A; the multi-core groups 240B-240N may be equipped with the same or similar sets of graphics processing resources.

As illustrated, a multi-core group 240A may include a set of graphics cores 243, a set of tensor cores 244, and a set of ray tracing cores 245. A scheduler/dispatcher 241 schedules and dispatches the graphics threads for execution on the various cores 243, 244, 245. In one embodiment, the tensor cores 244 are sparse tensor cores with hardware to enable multiplication operations having a zero-value input to be bypassed.
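That zero-bypass behavior can be modeled in software as shown below. This is only a scalar model of the idea that multiplications with a zero-valued input are never issued; it is not the tensor core datapath itself.

    // Scalar model of a sparse (zero-bypassing) multiply-accumulate.
    float sparse_dot(const float* a, const float* b, int n) {
        float acc = 0.0f;
        for (int i = 0; i < n; ++i) {
            if (a[i] == 0.0f || b[i] == 0.0f)
                continue;        // the hardware bypasses this multiply
            acc += a[i] * b[i];  // only nonzero pairs reach the multiplier
        }
        return acc;
    }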
A set of register files 242 can store operand values used by the cores 243, 244, 245 when executing the graphics threads. These register files may include, for example, integer registers for storing integer values, floating-point registers for storing floating-point values, vector registers for storing packed data elements (integer and/or floating-point data elements), and tile registers for storing tensor/matrix values. In one embodiment, the tile registers are implemented as combined sets of vector registers.

One or more combined level 1 (L1) cache and shared memory units 247 store graphics data, such as texture data, vertex data, pixel data, ray data, bounding volume data, and so forth, locally within each multi-core group 240A. One or more texture units 247 can also be used to perform texturing operations, such as texture mapping and sampling. A level 2 (L2) cache 253, shared by all or a subset of the multi-core groups 240A-240N, stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache 253 may be shared across a plurality of multi-core groups 240A-240N. One or more memory controllers 248 couple the GPU 239 to a memory 249, which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).

Input/output (I/O) circuitry 250 couples the GPU 239 to one or more I/O devices 252, such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect can be used to couple the I/O devices 252 to the GPU 239 and the memory 249. One or more I/O memory management units (IOMMUs) 251 of the I/O circuitry 250 couple the I/O devices 252 directly to the memory 249. In one embodiment, the IOMMU 251 manages page tables to map virtual addresses to physical addresses in the memory 249. In this embodiment, the I/O devices 252, CPU(s) 246, and GPU 239 may share the same virtual address space.

In one implementation, the IOMMU 251 supports virtualization. In this case, the IOMMU 251 may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within the memory 249). The base address of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). Although not illustrated in FIG. 2C, each of the cores 243, 244, 245 and/or the multi-core groups 240A-240N may include translation lookaside buffers (TLBs) to cache guest virtual to guest physical translations, guest physical to host physical translations, and guest virtual to host physical translations.

In one embodiment, the CPUs 246, the GPU 239, and the I/O devices 252 are integrated on a single semiconductor chip and/or chip package. The memory 249 may be integrated on the same chip or may be coupled to the memory controllers 248 via an off-chip interface. In one implementation, the memory 249 comprises GDDR6 memory that shares the same virtual address space as other physical system-level memories, although the underlying principles of the invention are not limited to this specific implementation.

In one embodiment, the tensor cores 244 include a plurality of execution units specifically designed to perform matrix operations, which are the fundamental compute operations used to perform deep learning operations. For example, parallel matrix multiplication operations may be used for neural network training and inferencing. The tensor cores 244 may perform matrix processing using a variety of operand precisions, including single precision floating-point (e.g., 32 bits), half-precision floating-point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes, or nibbles (4 bits). In one embodiment, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.

In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 244. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N x N x N matrix multiply, the tensor cores 244 may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers, and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, there are N dot products that are processed.

Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8), and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores 244 to ensure that the most efficient precision is used for different workloads (e.g., inferencing workloads, which can tolerate quantization to bytes and half-bytes).
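As a concrete software analogue of this tile-register style of matrix processing, the CUDA WMMA API exposes warp-level tile loads and a fused tile multiply-accumulate; the sketch below multiplies one pair of 16x16 half-precision tiles into a float accumulator. It is offered only as an illustration of the programming model for such hardware; the embodiments described herein are not tied to this particular API.

    // CUDA sketch: one 16x16x16 tile multiply-accumulate on tensor-core-style
    // hardware (requires a GPU and toolchain supporting WMMA, e.g. sm_70+).
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void tile_mma(const half* a, const half* b, float* c) {
        // Fragments play the role of tile registers holding whole sub-matrices.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

        wmma::fill_fragment(fc, 0.0f);
        wmma::load_matrix_sync(fa, a, 16);  // load the A tile (ld = 16)
        wmma::load_matrix_sync(fb, b, 16);  // load the B tile
        wmma::mma_sync(fc, fa, fb, fc);     // fused tile multiply-accumulate
        wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
    }
    // Launch with a single warp: tile_mma<<<1, 32>>>(dA, dB, dC);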
In one embodiment, the ray tracing cores 245 accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 245 include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 245 may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores 245 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 244. For example, in one embodiment, the tensor cores 244 implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores 245. However, the CPU(s) 246, graphics cores 243, and/or ray tracing cores 245 may also implement all or a portion of the denoising and/or deep learning algorithms.

In addition, as described above, a distributed approach to denoising may be employed in which the GPU 239 is in a computing device coupled to other computing devices over a network or high-speed interconnect. In this embodiment, the interconnected computing devices share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.

In one embodiment, the ray tracing cores 245 process all BVH traversal and ray-primitive intersections, saving the graphics cores 243 from being overloaded with thousands of instructions per ray.
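The fundamental operation behind BVH traversal is a ray/bounding-box test, sketched below using the standard slab method. This illustrates the test a traversal unit performs at each BVH node, not the hardware circuitry itself; the ray stores reciprocal direction components, a common precomputation.

    struct Ray {
        float ox, oy, oz;              // origin
        float inv_dx, inv_dy, inv_dz;  // 1 / direction, precomputed
    };

    // Slab test: a ray hits the box if the parameter intervals where it lies
    // between each pair of parallel planes ("slabs") overlap.
    __host__ __device__ bool ray_hits_aabb(const Ray& r, const float lo[3],
                                           const float hi[3], float t_max) {
        float tx0 = (lo[0] - r.ox) * r.inv_dx, tx1 = (hi[0] - r.ox) * r.inv_dx;
        float ty0 = (lo[1] - r.oy) * r.inv_dy, ty1 = (hi[1] - r.oy) * r.inv_dy;
        float tz0 = (lo[2] - r.oz) * r.inv_dz, tz1 = (hi[2] - r.oz) * r.inv_dz;
        float t_enter = fmaxf(fmaxf(fminf(tx0, tx1), fminf(ty0, ty1)),
                              fminf(tz0, tz1));
        float t_exit = fminf(fminf(fmaxf(tx0, tx1), fmaxf(ty0, ty1)),
                             fmaxf(tz0, tz1));
        return t_enter <= t_exit && t_exit >= 0.0f && t_enter <= t_max;
    }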
In one embodiment, each ray tracing core 245 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and a second set of specialized circuitry for performing ray-triangle intersection tests (e.g., intersecting rays that have been traversed). Thus, in one embodiment, the multi-core group 240A can simply launch a ray probe, and the ray tracing cores 245 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores 243, 244 are freed to perform other graphics or compute work while the ray tracing cores 245 perform the traversal and intersection operations.

In one embodiment, each ray tracing core 245 includes a traversal unit to perform BVH testing operations and an intersection unit that performs ray-primitive intersection tests. The intersection unit generates a "hit," "no hit," or "multiple hit" response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores 243 and tensor cores 244) are freed to perform other forms of graphics work.

In one particular embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores 243 and the ray tracing cores 245.

In one embodiment, the ray tracing cores 245 (and/or other cores 243, 244) include hardware support for a ray tracing instruction set such as Microsoft's DirectX Ray Tracing (DXR), which includes a DispatchRays command, as well as ray-generation shaders, closest-hit shaders, any-hit shaders, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform that may be supported by the ray tracing cores 245, graphics cores 243, and tensor cores 244 is Vulkan 1.1.85. Note, however, that the underlying principles of the invention are not limited to any particular ray tracing instruction set architecture (ISA).

In general, the various cores 245, 244, 243 may support a ray tracing instruction set that includes instructions/functions for ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions.
More specifically, one embodiment includes ray tracing instructions to perform the following functions:

Ray Generation - Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.

Closest Hit - A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene.

Any Hit - An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point.

Intersection - An intersection instruction performs a ray-primitive intersection test and outputs a result.

Per-primitive Bounding Box Construction - This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure).

Miss - Indicates that a ray misses all geometry within a scene, or a specified region of a scene.

Visit - Indicates the child volumes a ray will traverse.

Exceptions - Includes various types of exception handlers (e.g., invoked for various error conditions).

In one embodiment, the ray tracing cores 245 may be adapted to accelerate general-purpose compute operations that can be accelerated using computational techniques that are analogous to ray intersection tests. A compute framework may be provided that enables shader programs to be compiled into low-level instructions and/or primitives that perform general-purpose compute operations via the ray tracing cores. Exemplary computational problems that can benefit from compute operations performed on the ray tracing cores 245 include computations involving beam, wave, ray, or particle propagation within a coordinate space. Interactions associated with that propagation can be computed relative to a geometry or mesh within the coordinate space. For example, computations associated with electromagnetic signal propagation through an environment can be accelerated via the use of instructions or primitives that are executed via the ray tracing cores. Diffraction and reflection of the signals by objects in the environment can be computed as direct ray-tracing analogies.

The ray tracing cores 245 can also be used to perform computations that are not directly analogous to ray tracing. For example, mesh projection, mesh refinement, and volume sampling computations can be accelerated using the ray tracing cores 245. Generic coordinate space computations, such as nearest neighbor computations, can also be performed. For example, the set of points near a given point can be discovered by defining a bounding box in the coordinate space around the point. BVH and ray probe logic within the ray tracing cores 245 can then be used to determine the set of point intersections within the bounding box. The intersections constitute the origin point and the nearest neighbors to that origin point. Computations performed using the ray tracing cores 245 can be performed in parallel with computations performed on the graphics cores 243 and tensor cores 244. A shader compiler can be configured to compile a compute shader or other general-purpose graphics processing program into low-level primitives that can be parallelized across the graphics cores 243, tensor cores 244, and ray tracing cores 245.
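That nearest-neighbor formulation can be sketched as follows, where the box query is the step the BVH and ray probe logic would accelerate. The helper gather_points_in_box is a hypothetical stand-in for that accelerated query, shown here as a plain linear scan.

    #include <cmath>
    #include <vector>

    struct P3 { float x, y, z; };

    // Hypothetical stand-in for the BVH-accelerated bounding-box query.
    std::vector<P3> gather_points_in_box(const std::vector<P3>& pts,
                                         const P3& q, float r) {
        std::vector<P3> out;
        for (const P3& p : pts)
            if (std::fabs(p.x - q.x) <= r && std::fabs(p.y - q.y) <= r &&
                std::fabs(p.z - q.z) <= r)
                out.push_back(p);  // point lies inside the box around q
        return out;
    }

    // Select the nearest of the candidates returned by the box query.
    bool nearest_neighbor(const std::vector<P3>& pts, const P3& q, float r,
                          P3& best) {
        std::vector<P3> cand = gather_points_in_box(pts, q, r);
        float best_d2 = 3.0f * r * r + 1.0f;  // anything in the box is closer
        for (const P3& p : cand) {
            float dx = p.x - q.x, dy = p.y - q.y, dz = p.z - q.z;
            float d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < best_d2) { best_d2 = d2; best = p; }
        }
        return !cand.empty();
    }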
FIG. 2D is a block diagram of a general-purpose graphics processing unit (GPGPU) 270 that can be configured as a graphics processor and/or compute accelerator, according to embodiments described herein. The GPGPU 270 can interconnect with host processors (e.g., one or more CPUs 246) and memory 271, 272 via one or more system and/or memory buses. In one embodiment, the memory 271 is system memory that may be shared with the one or more CPUs 246, while the memory 272 is device memory that is dedicated to the GPGPU 270. In one embodiment, components within the GPGPU 270 and the device memory 272 may be mapped into memory addresses that are accessible to the one or more CPUs 246. Access to the memories 271 and 272 may be facilitated via a memory controller 268. In one embodiment, the memory controller 268 includes an internal direct memory access (DMA) controller 269, or can include logic to perform operations that would otherwise be performed by a DMA controller.

The GPGPU 270 includes multiple cache memories, including an L2 cache 253, an L1 cache 254, an instruction cache 255, and a shared memory 256, at least a portion of which may also be partitioned as cache memory. The GPGPU 270 also includes multiple compute units 260A-260N. Each compute unit 260A-260N includes a set of vector registers 261, a set of scalar registers 262, a set of vector logic units 263, and a set of scalar logic units 264. The compute units 260A-260N can also include a local shared memory 265 and a program counter 266. The compute units 260A-260N can couple with a constant cache 267, which can be used to store constant data, which is data that will not change during the run of a kernel or shader program that executes on the GPGPU 270. In one embodiment, the constant cache 267 is a scalar data cache, and cached data can be fetched directly into the scalar registers 262.

During operation, the one or more CPUs 246 can write commands into registers in the GPGPU 270 or into memory in the GPGPU 270 that has been mapped into an accessible address space. Command processors 257 can read the commands from the registers or memory and determine how those commands will be processed within the GPGPU 270. A thread dispatcher 258 can then be used to dispatch threads to the compute units 260A-260N to perform those commands. Each compute unit 260A-260N can execute threads independently of the other compute units. Additionally, each compute unit 260A-260N can be independently configured for conditional computation and can conditionally output the results of computation to memory. The command processors 257 can interrupt the one or more CPUs 246 when the submitted commands are complete.
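A minimal sketch of that submission flow follows, with the command encoding and queue layout invented for illustration only: the CPU produces commands into mapped memory, and the command processors 257 consume them and hand threads to the thread dispatcher 258.

    #include <cstdint>

    // Hypothetical command and queue layouts, mapped into an address space
    // that both the CPU and the GPGPU can access.
    struct Command {
        uint32_t opcode;       // what the command processor should do
        uint64_t kernel_addr;  // program for the compute units to execute
    };

    struct CommandQueue {
        Command ring[64];
        volatile uint32_t tail;  // written by the CPU
        volatile uint32_t head;  // advanced by the command processor
    };

    // CPU side: stage a command, then advance the tail that the command
    // processor polls. Completion is signaled back with an interrupt.
    void cpu_submit(CommandQueue* q, const Command& cmd) {
        q->ring[q->tail % 64] = cmd;
        q->tail = q->tail + 1;
    }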
Figures 3A-3C illustrate block diagrams of additional graphics processor and compute accelerator architectures provided by embodiments described herein. Those elements of FIGS. 3A-3C having the same reference numbers (or names) as the elements of any other figure in this document can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

FIG. 3A is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores, or other semiconductor devices such as, but not limited to, memory devices or network interfaces. In some embodiments, the graphics processor communicates via a memory-mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, the graphics processor 300 includes a memory interface 314 to access memory. The memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, the graphics processor 300 also includes a display controller 302 to drive display output data to a display device 318. The display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 318 can be an internal or external display device. In one embodiment, the display device 318 is a head-mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In some embodiments, the graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC and H.265/HEVC, Alliance for Open Media (AOMedia) VP8, VP9, and AV1, as well as the Society of Motion Picture and Television Engineers (SMPTE) 421M/VC-1 and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG) formats.

In some embodiments, the graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of a graphics processing engine (GPE) 310. In some embodiments, the GPE 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, the GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 312 includes programmable and fixed function elements that perform various tasks within the elements and/or spawn execution threads to a 3D/Media subsystem 315. While the 3D pipeline 312 can be used to perform media operations, an embodiment of the GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, the media pipeline 316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, the video codec engine 306. In some embodiments, the media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on the 3D/Media subsystem 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in the 3D/Media subsystem 315.

In some embodiments, the 3D/Media subsystem 315 includes logic for executing threads spawned by the 3D pipeline 312 and the media pipeline 316. In one embodiment, the pipelines send thread execution requests to the 3D/Media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources.
The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, the 3D/Media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

FIG. 3B illustrates a graphics processor 320 with a tiled architecture, according to embodiments described herein. In one embodiment, the graphics processor 320 includes a graphics processing engine cluster 322 having multiple instances of the graphics processing engine 310 of FIG. 3A within graphics engine tiles 310A-310D. Each graphics engine tile 310A-310D can be interconnected via a set of tile interconnects 323A-323F. Each graphics engine tile 310A-310D can also be connected to a memory module or memory device 326A-326D via memory interconnects 325A-325D. The memory devices 326A-326D can use any graphics memory technology. For example, the memory devices 326A-326D may be graphics double data rate (GDDR) memory. In one embodiment, the memory devices 326A-326D are high-bandwidth memory (HBM) modules that can be on-die with their respective graphics engine tiles 310A-310D. In one embodiment, the memory devices 326A-326D are stacked memory devices that can be stacked on top of their respective graphics engine tiles 310A-310D. In one embodiment, each graphics engine tile 310A-310D and its associated memory 326A-326D reside on separate chiplets, which are bonded to a base die or base substrate, as described in further detail in FIGS. 11B-11D.

The graphics processor 320 may be configured with a non-uniform memory access (NUMA) system in which memory devices 326A-326D are coupled with associated graphics engine tiles 310A-310D. A given memory device may be accessed by graphics engine tiles other than the tile to which it is directly connected. However, access latency to the memory devices 326A-326D may be lowest when accessing a local tile. In one embodiment, a cache coherent NUMA (ccNUMA) system is enabled that uses the tile interconnects 323A-323F to enable communication between cache controllers within the graphics engine tiles 310A-310D, to maintain a consistent memory image when more than one cache stores the same memory location.

The graphics processing engine cluster 322 can connect with an on-chip or on-package fabric interconnect 324. In one embodiment, the fabric interconnect 324 includes a network processor, a network on a chip (NoC), or another switch processor to enable the fabric interconnect 324 to act as a packet-switched fabric interconnect that switches data packets between components of the graphics processor 320. The fabric interconnect 324 can enable communication between the graphics engine tiles 310A-310D and components such as the video codec engine 306 and one or more copy engines 304. The copy engines 304 can be used to move data out of, into, and between the memory devices 326A-326D and memory that is external to the graphics processor 320 (e.g., system memory).
The fabric interconnect 324 can also couple with one or more of the tile interconnects 323A-323F to facilitate or enhance the interconnection between the graphics engine tiles 310A-310D. The fabric interconnect 324 is also configurable to interconnect multiple instances of the graphics processor 320 (e.g., via the host interface 328), enabling tile-to-tile communication between graphics engine tiles 310A-310D of multiple GPUs. In one embodiment, the graphics engine tiles 310A-310D of multiple GPUs can be presented to a host system as a single logical device.

The graphics processor 320 may optionally include a display controller 302 to enable a connection with the display device 318. The graphics processor may also be configured as a graphics or compute accelerator. In the accelerator configuration, the display controller 302 and the display device 318 may be omitted.

The graphics processor 320 can connect to a host system via a host interface 328. The host interface 328 can enable communication between the graphics processor 320, system memory, and/or other system components. The host interface 328 can be, for example, a PCI Express bus or another type of host system interface. For example, the host interface 328 can be an NVLink or NVSwitch interface. The host interface 328 and fabric interconnect 324 can cooperate to enable multiple instances of the graphics processor 320 to act as a single logical device. Cooperation between the host interface 328 and fabric interconnect 324 can also enable the individual graphics engine tiles 310A-310D to be presented to the host system as distinct logical graphics devices.

FIG. 3C illustrates a compute accelerator 330, according to embodiments described herein. The compute accelerator 330 can include architectural similarities with the graphics processor 320 of FIG. 3B and is optimized for compute acceleration. A compute engine cluster 332 can include a set of compute engine tiles 340A-340D that include execution logic optimized for parallel or vector-based general-purpose compute operations. In some embodiments, the compute engine tiles 340A-340D do not include fixed function graphics processing logic, although in one embodiment one or more of the compute engine tiles 340A-340D can include logic to perform media acceleration. The compute engine tiles 340A-340D can connect to memory 326A-326D via memory interconnects 325A-325D. The memory 326A-326D and memory interconnects 325A-325D may be similar technology as in the graphics processor 320, or can be different. The compute engine tiles 340A-340D can also be interconnected via a set of tile interconnects 323A-323F and may be connected with and/or interconnected by a fabric interconnect 324. Cross-tile communications can be facilitated via the fabric interconnect 324. The fabric interconnect 324 (e.g., via the host interface 328) can also facilitate communication between compute engine tiles 340A-340D of multiple instances of the compute accelerator 330. In one embodiment, the compute accelerator 330 includes a large L3 cache 336 that can be configured as a device-wide cache. The compute accelerator 330 can also connect to a host processor and memory via a host interface 328 in a similar manner as the graphics processor 320 of FIG. 3B.
The compute accelerator 330 can also include an integrated network interface 342. In one embodiment, the network interface 342 includes a network processor and controller logic that enables the compute engine cluster 332 to communicate over a physical layer interconnect 344 without requiring data to traverse memory of a host system. In one embodiment, one of the compute engine tiles 340A-340D is replaced by network processor logic, and data to be transmitted or received via the physical layer interconnect 344 may be transmitted directly to or from the memory 326A-326D. Multiple instances of the compute accelerator 330 may be joined via the physical layer interconnect 344 into a single logical device. Alternatively, the various compute engine tiles 340A-340D may be presented as distinct network-accessible compute accelerator devices.

Graphics Engine

FIG. 4 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 410 is a version of the GPE 310 shown in FIG. 3A and may also represent a graphics engine tile 310A-310D of FIG. 3B. Those elements of FIG. 4 having the same reference numbers (or names) as the elements of any other figure in this document can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 312 and media pipeline 316 of FIG. 3A are illustrated. The media pipeline 316 is optional in some embodiments of the GPE 410 and may not be explicitly included within the GPE 410. For example, and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 410.

In some embodiments, the GPE 410 couples with or includes a command streamer 403, which provides a command stream to the 3D pipeline 312 and/or media pipeline 316. Alternatively or additionally, the command streamer 403 may be directly coupled to a unified return buffer 418, which is communicatively coupled to a graphics core array 414. In some embodiments, the command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, the command streamer 403 receives commands from the memory and sends the commands to the 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and the media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 312 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to the graphics core array 414.
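The ring-buffer fetch described above can be sketched as follows, with a hypothetical command encoding; the batch case models the batch command buffers, which redirect the streamer to a secondary buffer of commands before it resumes the ring.

    #include <cstddef>
    #include <cstdint>

    enum : uint32_t { CMD_3D = 1, CMD_MEDIA = 2, CMD_BATCH = 3 };

    struct GfxCmd { uint32_t op; uint64_t payload; };

    // Consume commands between 'head' and 'tail' and route each one to the
    // appropriate pipeline, following batch-buffer indirections as needed.
    void stream_commands(const GfxCmd* ring, size_t ring_size,
                         size_t* head, size_t tail) {
        while (*head != tail) {
            const GfxCmd& c = ring[*head % ring_size];
            switch (c.op) {
            case CMD_3D:    /* route to the 3D pipeline 312 */    break;
            case CMD_MEDIA: /* route to the media pipeline 316 */ break;
            case CMD_BATCH: {
                // A batch command buffer: process the secondary buffer that
                // the payload points to, then resume fetching from the ring.
                const GfxCmd* batch =
                    reinterpret_cast<const GfxCmd*>(c.payload);
                (void)batch;
                break;
            }
            }
            *head += 1;
        }
    }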
In one embodiment, the graphics core array 414 includes one or more blocks of graphics cores (e.g., graphics core(s) 415A, graphics core(s) 415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic to perform graphics and compute operations, as well as fixed function texture processing logic and/or machine learning and artificial intelligence acceleration logic.

In various embodiments, the 3D pipeline 312 can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader and/or GPGPU programs, by processing the instructions and dispatching execution threads to the graphics core array 414. The graphics core array 414 provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s) 415A-415B of the graphics core array 414 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

In some embodiments, the graphics core array 414 includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s) 107 of FIG. 1 or the cores 202A-202N of FIG. 2A.

Output data generated by threads executing on the graphics core array 414 can be output to memory in a unified return buffer (URB) 418. The URB 418 can store data for multiple threads. In some embodiments, the URB 418 may be used to send data between different threads executing on the graphics core array 414. In some embodiments, the URB 418 may additionally be used for synchronization between threads on the graphics core array 414 and fixed function logic within the shared function logic 420.

In some embodiments, the graphics core array 414 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of the GPE 410. In one embodiment, the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

The graphics core array 414 couples with shared function logic 420 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 420 are hardware logic units that provide specialized supplemental functionality to the graphics core array 414. In various embodiments, the shared function logic 420 includes, but is not limited to, sampler logic 421, math logic 422, and inter-thread communication (ITC) logic 423. Additionally, some embodiments implement one or more caches 425 within the shared function logic 420.

A shared function is implemented at least in a case where the demand for a given specialized function is insufficient for inclusion within the graphics core array 414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 420 and is shared among the execution resources within the graphics core array 414.
The precise set of functions that are shared with the graphics core array 414 and included within the graphics core array 414 varies across embodiments. In some embodiments, specific shared functions within the shared function logic 420 that are used extensively by the graphics core array 414 may be included within shared function logic 416 within the graphics core array 414. In various embodiments, the shared function logic 416 within the graphics core array 414 can include some or all logic within the shared function logic 420. In one embodiment, all logic elements within the shared function logic 420 may be duplicated within the shared function logic 416 of the graphics core array 414. In one embodiment, the shared function logic 420 is excluded in favor of the shared function logic 416 within the graphics core array 414.

Execution Units

Figures 5A-5B illustrate thread execution logic 500 including an array of processing elements employed in a graphics processor core, according to embodiments described herein. Those elements of FIGS. 5A-5B having the same reference numbers (or names) as the elements of any other figure in this document can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. FIGS. 5A-5B illustrate an overview of the thread execution logic 500, which may be representative of the hardware logic illustrated with each sub-core 221A-221F of FIG. 2B. FIG. 5A is representative of an execution unit within a general-purpose graphics processor, while FIG. 5B is representative of an execution unit that may be used within a compute accelerator.

As illustrated in FIG. 5A, in some embodiments the thread execution logic 500 includes a shader processor 502, a thread dispatcher 504, an instruction cache 506, a scalable execution unit array including a plurality of graphics execution units 508A-508N, a sampler 510, shared local memory 511, a data cache 512, and a data port 514. In one embodiment, the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of the graphics execution units 508A, 508B, 508C, 508D, through 508N-1 and 508N) based on the computational requirements of a workload. In one embodiment, the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, the thread execution logic 500 includes one or more connections to memory, such as system memory or cache memory, through one or more of the instruction cache 506, the data port 514, the sampler 510, and the graphics execution units 508A-508N. In some embodiments, each execution unit (e.g., 508A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of graphics execution units 508A-508N is scalable to include any number of individual execution units.

In some embodiments, the graphics execution units 508A-508N are primarily used to execute shader programs. The shader processor 502 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 504. In one embodiment, the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and to instantiate the requested threads on one or more of the graphics execution units 508A-508N.
For example, the geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. In some embodiments, the thread dispatcher 504 can also process runtime thread spawning requests from the executing shader programs.

In some embodiments, the graphics execution units 508A-508N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D, OpenGL, Vulkan, etc.) are executed with minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). Each of the execution units 508A-508N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher-latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread state. Execution is multi-issue per clock to pipelines capable of integer and single- and double-precision floating-point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the graphics execution units 508A-508N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, a fragment shader, or another type of shader program, including a different vertex shader. Various embodiments can apply to use of execution by single instruction multiple thread (SIMT), as an alternative to, or in addition to, a SIMD use case. Reference to a SIMD core or operation can also apply to SIMT, or apply to SIMD in combination with SIMT.

Each of the graphics execution units 508A-508N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical arithmetic logic units (ALUs), floating-point units (FPUs), or other logic units (e.g., tensor cores, ray tracing cores, etc.) for a particular graphics processor. In some embodiments, the graphics execution units 508A-508N support integer and floating-point data types.

The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register, and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register, and the execution unit operates on the vector as four separate 64-bit packed data elements (quad-word (QW) size data elements), eight separate 32-bit packed data elements (double word (DW) size data elements), sixteen separate 16-bit packed data elements (word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.
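Those packing rules can be stated directly in code. The sketch below shows how a single 256-bit register can be viewed at each of the element sizes named above, with a SIMD add over eight 32-bit channels as a worked example.

    #include <cstdint>

    // One 256-bit register viewed at the element sizes named above.
    union Reg256 {
        uint64_t qw[4];  // four quad-word (QW) elements
        uint32_t dw[8];  // eight double-word (DW) elements
        uint16_t w[16];  // sixteen word (W) elements
        uint8_t  b[32];  // thirty-two byte (B) elements
    };

    // Element-wise add with the register interpreted as 8 x 32-bit channels;
    // each loop iteration corresponds to one execution channel.
    Reg256 simd_add_dw(const Reg256& a, const Reg256& b) {
        Reg256 r;
        for (int lane = 0; lane < 8; ++lane)
            r.dw[lane] = a.dw[lane] + b.dw[lane];
        return r;
    }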
In one embodiment, one or more execution units can be combined into a fused execution unit 509A-509N having thread control logic (507A-507N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 509A-509N includes at least two execution units. For example, the fused execution unit 509A includes a first EU 508A, a second EU 508B, and thread control logic 507A that is common to the first EU 508A and the second EU 508B. The thread control logic 507A controls threads executed on the fused graphics execution unit 509A, allowing each EU within the fused execution units 509A-509N to execute using a common instruction pointer register.

One or more internal instruction caches (e.g., 506) are included in the thread execution logic 500 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 512) are included to cache thread data during thread execution. Threads executing on the execution logic 500 can also store explicitly managed data in the shared local memory 511. In some embodiments, a sampler 510 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, the sampler 510 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, the graphics and media pipelines send thread initiation requests to the thread execution logic 500 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, the pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 502 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, the pixel processor logic within the shader processor 502 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 502 dispatches threads to an execution unit (e.g., 508A) via the thread dispatcher 504.
In some embodiments, the shader processor 502 uses texture sampling logic in the sampler 510 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

In some embodiments, the data port 514 provides a memory access mechanism for the thread execution logic 500 to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, the data port 514 includes or couples to one or more cache memories (e.g., data cache 512) to cache data for memory access via the data port.

In one embodiment, the execution logic 500 can also include a ray tracer 505 that can provide ray tracing acceleration functionality. The ray tracer 505 can support a ray tracing instruction set that includes instructions/functions for ray generation. The ray tracing instruction set can be similar to or different from the ray tracing instruction set supported by the ray tracing core 245 in FIG. 2C.

FIG. 5B illustrates exemplary internal details of an execution unit 508, according to embodiments. The graphics execution unit 508 can include an instruction fetch unit 537, a general register file array (GRF) 524, an architectural register file array (ARF) 526, a thread arbiter 522, a send unit 530, a branch unit 532, a set of SIMD floating-point units (FPUs) 534, and, in one embodiment, a set 535 of dedicated integer SIMD ALUs. The GRF 524 and ARF 526 include the set of general register files and architectural register files associated with each simultaneous hardware thread that may be active in the graphics execution unit 508. In one embodiment, per-thread architectural state is maintained in the ARF 526, while data used during thread execution is stored in the GRF 524. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF 526.

In one embodiment, the graphics execution unit 508 has an architecture that is a combination of simultaneous multi-threading (SMT) and fine-grained interleaved multi-threading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on the target number of simultaneous threads and the number of registers per execution unit, where execution unit resources are divided across the logic used to execute multiple simultaneous threads. The number of logical threads that may be executed by the graphics execution unit 508 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread.

In one embodiment, the graphics execution unit 508 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 522 of the graphics execution unit 508 can dispatch the instructions to one of the send unit 530, the branch unit 532, or the SIMD FPU(s) 534 for execution. Each execution thread can access 128 general-purpose registers within the GRF 524, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In one embodiment, each execution unit thread has access to 4 kilobytes within the GRF 524, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments.
In one embodiment, the graphics execution unit 508 is partitioned into seven hardware threads that can independently perform computational operations, although the number of threads per execution unit can also vary according to embodiments. For example, in one embodiment up to 16 hardware threads are supported. In an embodiment in which seven threads may access 4 kilobytes, the GRF 524 can store a total of 28 kilobytes. Where 16 threads may access 4 kilobytes, the GRF 524 can store a total of 64 kilobytes. Flexible addressing modes can permit multiple registers to be addressed together, building effectively wider registers or representing strided rectangular block data structures.

In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by the message-passing send unit 530. In one embodiment, branch instructions are dispatched to a dedicated branch unit 532 to facilitate SIMD divergence and eventual convergence.

In one embodiment, the graphics execution unit 508 includes one or more SIMD floating-point units (FPUs) 534 to perform floating-point operations. In one embodiment, the FPU(s) 534 also support integer computation. In one embodiment, the FPU(s) 534 can SIMD-execute up to M 32-bit floating-point (or integer) operations, or SIMD-execute up to 2M 16-bit integer or 16-bit floating-point operations. In one embodiment, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double-precision 64-bit floating point. In some embodiments, a set 535 of 8-bit integer SIMD ALUs is also present, and may be specifically optimized to perform operations associated with machine learning computations.

In one embodiment, arrays of multiple instances of the graphics execution unit 508 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. In one embodiment, the execution unit 508 can execute instructions across a plurality of execution channels. In a further embodiment, each thread executed on the graphics execution unit 508 is executed on a different channel.

FIG. 6 illustrates an additional execution unit 600, according to an embodiment. The execution unit 600 may be a compute-optimized execution unit for use in, for example, the compute engine tiles 340A-340D of FIG. 3C, but is not limited as such. Variants of the execution unit 600 may also be used in the graphics engine tiles 310A-310D as shown in FIG. 3B. In one embodiment, the execution unit 600 includes a thread control unit 601, a thread state unit 602, an instruction fetch/prefetch unit 603, and an instruction decode unit 604. The execution unit 600 additionally includes a register file 606 that stores registers that can be assigned to hardware threads within the execution unit. The execution unit 600 additionally includes a send unit 607 and a branch unit 608. In one embodiment, the send unit 607 and the branch unit 608 can operate similarly to the send unit 530 and the branch unit 532 of the graphics execution unit 508 of FIG. 5B.

The execution unit 600 also includes a compute unit 610 that includes multiple different types of functional units. The compute unit 610 can include an ALU 611, a systolic array 612, and a math unit 613. The ALU 611 includes an array of arithmetic logic units.
The ALU 611 can be configured to perform 64-bit, 32-bit, and 16-bit integer and floating-point operations across multiple processing lanes and data channels and for multiple hardware and/or software threads. The ALU 611 can perform integer and floating-point operations simultaneously (e.g., within the same clock cycle).

The systolic array 612 includes a W-wide and D-deep network of data processing units that can be used to perform vector or other data-parallel operations in a systolic manner. In one embodiment, the systolic array 612 can be configured to perform various matrix operations, including dot product, outer product, and general matrix-matrix multiplication (GEMM) operations. In one embodiment, the systolic array 612 supports 16-bit floating-point operations as well as 8-bit, 4-bit, 2-bit, and binary integer operations. In addition to matrix multiply operations, the systolic array 612 can be configured to accelerate specific machine learning operations. In such embodiments, the systolic array 612 can be configured with support for the bfloat (brain floating point) 16-bit floating-point format or the tensor float 32-bit floating-point format (TF32), which have different numbers of mantissa and exponent bits relative to Institute of Electrical and Electronics Engineers (IEEE) 754 formats.

The systolic array 612 includes hardware to accelerate sparse matrix operations. In one embodiment, multiply operations for sparse regions of input data can be bypassed at the processing-element level by skipping multiply operations that have a zero-value operand. In one embodiment, sparsity within input matrices can be detected, and operations having known output values can be bypassed before being submitted to the processing elements of the systolic array 612. Additionally, the loading of zero-value operands into the processing elements can be bypassed, and the processing elements can be configured to perform multiplications on the non-zero-value input elements. Output can be generated in a compressed format, with associated decompression or decoding metadata. The output can be cached in the compressed format. The output can be kept in the compressed format when written to local memory or host system memory, or it can be decompressed before being written to local memory or host system memory.

In one embodiment, the systolic array 612 includes hardware to enable operations on sparse data having a compressed representation. A compressed representation of a sparse matrix stores non-zero values and metadata that identifies the positions of the non-zero values within the matrix. Exemplary compressed representations include, but are not limited to, compressed tensor representations such as compressed sparse row (CSR), compressed sparse column (CSC), and compressed sparse fiber (CSF) representations. Support for compressed representations enables operations to be performed on input in a compressed tensor format without requiring the compressed representation to be decompressed or decoded. In such embodiments, operations can be performed only on non-zero input values, and the resulting non-zero output values can be mapped into an output matrix. In some embodiments, hardware support is also provided for machine-specific lossless data compression formats that are used when transmitting data within hardware or across system buses. Such data may be retained in a compressed format for sparse input data, and the systolic array 612 can use the compression metadata for the compressed data to enable operations to be performed only on non-zero values, or to enable blocks of zero data input to be bypassed for multiply operations.
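The compressed sparse row (CSR) representation mentioned above can be illustrated in software. The sketch below stores only the non-zero values plus position metadata, and performs a sparse matrix-vector multiply that, by construction, visits non-zero elements only; the structure is illustrative and does not depict the systolic array's actual storage format.

```cpp
// CSR illustration: store non-zero values and their positions, and
// operate only on non-zero elements. Software sketch only.
#include <cstdio>
#include <vector>

struct CsrMatrix {
    int rows = 0, cols = 0;
    std::vector<double> values;   // non-zero values, row-major order
    std::vector<int>    col_idx;  // column of each non-zero value
    std::vector<int>    row_ptr;  // index into values where each row starts
};

// y = A * x; zero-valued operands are skipped by construction.
std::vector<double> spmv(const CsrMatrix& A, const std::vector<double>& x) {
    std::vector<double> y(A.rows, 0.0);
    for (int r = 0; r < A.rows; ++r)
        for (int k = A.row_ptr[r]; k < A.row_ptr[r + 1]; ++k)
            y[r] += A.values[k] * x[A.col_idx[k]];
    return y;
}

int main() {
    // 3x3 matrix [[5,0,0],[0,0,3],[2,0,1]] in CSR form.
    CsrMatrix A{3, 3, {5, 3, 2, 1}, {0, 2, 0, 2}, {0, 1, 2, 4}};
    std::vector<double> x{1, 2, 3};
    for (double v : spmv(A, x)) std::printf("%g ", v);  // prints 5 9 5
    std::printf("\n");
}
```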
In one embodiment, the math unit 613 can be included to perform a specific subset of mathematical operations in an efficient and lower-power manner than the ALU 611. The math unit 613 can include variants of the math logic that may be found in the shared function logic of graphics processing engines provided by other embodiments (e.g., the math logic 422 of the shared function logic 420 of FIG. 4). In one embodiment, the math unit 613 can be configured to perform 32-bit and 64-bit floating-point operations.

The thread control unit 601 includes logic to control the execution of threads within the execution unit. The thread control unit 601 can include thread arbitration logic to start, stop, and preempt execution of threads within the execution unit 600. The thread state unit 602 can be used to store thread state for threads assigned to execute on the execution unit 600. Storing thread state within the execution unit 600 enables threads to be rapidly preempted when they become blocked or idle. The instruction fetch/prefetch unit 603 can fetch instructions from an instruction cache of higher-level execution logic (e.g., instruction cache 506 in FIG. 5A). The instruction fetch/prefetch unit 603 can also issue prefetch requests for instructions to be loaded into the instruction cache based on an analysis of the currently executing threads. The instruction decode unit 604 can be used to decode instructions to be executed by the compute units. In one embodiment, the instruction decode unit 604 can be used as a secondary decoder to decode complex instructions into constituent micro-operations.

The execution unit 600 additionally includes a register file 606 that can be used by hardware threads executing on the execution unit 600. Registers in the register file 606 can be divided across the logic used to execute multiple simultaneous threads within the compute unit 610 of the execution unit 600. The number of logical threads that may be executed by the graphics execution unit 600 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread. The size of the register file 606 can vary across embodiments based on the number of supported hardware threads. In one embodiment, register renaming may be used to dynamically allocate registers to hardware threads.

FIG. 7 is a block diagram illustrating graphics processor instruction formats 700, according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid-lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a subset of the instructions. In some embodiments, the instruction format 700 described and illustrated consists of macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.
Thus, a single instruction may cause the hardware to perform multiple micro-operations.

In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, the instruction options, and the number of operands. The native 128-bit instruction format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values, and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 710. Other sizes and formats of instruction can be used.

For each format, the instruction opcode 712 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction, the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, an instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 710, an exec-size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, the exec-size field 716 is not available for use in the 64-bit compact instruction format 730.

Some execution unit instructions have up to three operands, including two source operands, src0 720 and src1 722, and one destination operand 718. In some embodiments, the execution units support dual-destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726 specifying, for example, whether a direct register addressing mode or an indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.

In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726 that specifies an address mode and/or an access mode for the instruction. In one embodiment, the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte-aligned access mode and a 1-byte-aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction can use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction can use 16-byte-aligned addressing for all source and destination operands.
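The index-based compaction described above can be pictured with a toy decoder: a compact instruction carries an index, the hardware looks the index up in a compaction table, and the table output is combined with the remaining fields to reconstruct a native instruction. The field layout and table contents in this C++ sketch are hypothetical; the real compaction tables are hardware-defined.

```cpp
// Toy model of index-based instruction compaction. Field widths, the
// compact layout, and table contents are invented for illustration.
#include <array>
#include <cstdint>
#include <cstdio>

struct Native128 {           // decoded 128-bit instruction (simplified)
    uint32_t control_bits;   // would hold ctrl/exec-size/operand fields
    uint32_t opcode;
    uint64_t operands;
};

// Compaction table: each entry expands to 32 bits of control state.
constexpr std::array<uint32_t, 4> kCompactionTable = {
    0x00000000, 0x00010203, 0x0a0b0c0d, 0xffff0000,
};

// Hypothetical compact layout: [63:32] operands, [31:8] opcode, [7:0] index.
Native128 expand_compact(uint64_t compact) {
    uint8_t index = compact & 0xff;
    Native128 n;
    n.control_bits = kCompactionTable[index % kCompactionTable.size()];
    n.opcode       = (compact >> 8) & 0xffffff;
    n.operands     = compact >> 32;  // remaining fields pass through
    return n;
}

int main() {
    Native128 n = expand_compact(0x1234567800abcd02ull);
    std::printf("ctrl=%08x op=%06x src=%llx\n",
                (unsigned)n.control_bits, (unsigned)n.opcode,
                (unsigned long long)n.operands);
}
```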
In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.

In some embodiments, instructions are grouped based on opcode 712 bit-fields to simplify opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, the move and logic group 742 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 748 performs the arithmetic operations in parallel across data channels. A vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. In one embodiment, the illustrated opcode decode 740 can be used to determine which portion of an execution unit will be used to execute a decoded instruction. For example, some instructions may be designated as systolic instructions to be performed by a systolic array. Other instructions, such as ray tracing instructions (not shown), can be routed to a ray tracing core or ray tracing logic within a slice or partition of execution logic.
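As an illustration of this grouping, the following sketch decodes bits 4 through 6 of an 8-bit opcode into an instruction group, using only the example encodings given above; the actual hardware decode has more cases.

```cpp
// Opcode-group decode sketch based on the example encodings in the text.
#include <cstdint>
#include <cstdio>

enum class OpGroup { MoveLogic, FlowControl, Misc, ParallelMath, VectorMath, Other };

OpGroup decode_group(uint8_t opcode) {
    switch ((opcode >> 4) & 0x7) {   // extract bits 4, 5, and 6
        case 0x0:
        case 0x1: return OpGroup::MoveLogic;     // 0000xxxxb mov, 0001xxxxb logic
        case 0x2: return OpGroup::FlowControl;   // 0010xxxxb call/jmp
        case 0x3: return OpGroup::Misc;          // 0011xxxxb wait/send
        case 0x4: return OpGroup::ParallelMath;  // 0100xxxxb add/mul
        case 0x5: return OpGroup::VectorMath;    // 0101xxxxb dp4
        default:  return OpGroup::Other;
    }
}

int main() {
    std::printf("0x20 -> flow control? %d\n",
                decode_group(0x20) == OpGroup::FlowControl);  // prints 1
    std::printf("0x40 -> parallel math? %d\n",
                decode_group(0x40) == OpGroup::ParallelMath); // prints 1
}
```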
Graphics pipeline

FIG. 8 is a block diagram of another embodiment of a graphics processor 800. Those elements of FIG. 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, the graphics processor 800 includes a geometry pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, the graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown), or via commands issued to the graphics processor 800 via a ring interconnect 802.

In some embodiments, the ring interconnect 802 couples the graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from the ring interconnect 802 are interpreted by a command streamer 803, which supplies commands to the individual components of the geometry pipeline 820 or the media pipeline 830.

In some embodiments, the command streamer 803 directs the operation of a vertex fetcher 805 that reads vertex data from memory and executes vertex-processing commands provided by the command streamer 803. In some embodiments, the vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate-space transformation and lighting operations on each vertex. In some embodiments, the vertex fetcher 805 and the vertex shader 807 execute vertex-processing instructions by dispatching execution threads to the execution units 852A-852B via a thread dispatcher 831.

In some embodiments, the execution units 852A-852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, the execution units 852A-852B have an attached L1 cache 851 that is specific to each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.

In some embodiments, the geometry pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of the tessellation output. A tessellator 813 operates at the direction of the hull shader 811 and contains special-purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to the geometry pipeline 820. In some embodiments, if tessellation is not used, the tessellation components (e.g., hull shader 811, tessellator 813, and domain shader 817) can be bypassed. The tessellation components can operate based on data received from the vertex shader 807.

In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to the execution units 852A-852B, or they can proceed directly to a clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than on vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 819 receives input from the vertex shader 807. In some embodiments, the geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation when the tessellation units are disabled.

Before rasterization, the clipper 829 processes the vertex data. The clipper 829 may be a fixed-function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into per-pixel representations. In some embodiments, pixel shader logic is included in the thread execution logic 850.
In some embodiments, an application can bypass the rasterizer and depth test component 873 and access un-rasterized vertex data via a stream-out unit 823.

The graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and messages to pass among the major components of the processor. In some embodiments, the execution units 852A-852B and the associated logic units (e.g., L1 cache 851, sampler 854, texture cache 858, etc.) interconnect via a data port 856 to perform memory access and to communicate with the render output pipeline components of the processor. In some embodiments, the sampler 854, the caches 851 and 858, and the execution units 852A-852B each have separate memory access paths. In one embodiment, the texture cache 858 can also be configured as a sampler cache.

In some embodiments, the render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into associated pixel-based representations. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed-function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments. A pixel operations component 877 performs pixel-based operations on the data, though in some instances pixel operations associated with 2D operations (e.g., bit-block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing data to be shared without the use of main system memory.

In some embodiments, the media pipeline 830 includes a media engine 837 and a video front-end 834. In some embodiments, the video front-end 834 receives pipeline commands from the command streamer 803. In some embodiments, the media pipeline 830 includes a separate command streamer. In some embodiments, the video front-end 834 processes media commands before sending the commands to the media engine 837. In some embodiments, the media engine 837 includes thread-spawning functionality to spawn threads for dispatch to the thread execution logic 850 via the thread dispatcher 831.

In some embodiments, the graphics processor 800 includes a display engine 840. In some embodiments, the display engine 840 is external to the processor 800 and couples with the graphics processor via the ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, the display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, the display engine 840 contains special-purpose logic capable of operating independently of the 3D pipeline. In some embodiments, the display controller 843 couples with a display device (not shown), which may be a system-integrated display device (as in a laptop computer) or an external display device attached via a display device connector.

In some embodiments, the geometry pipeline 820 and the media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces, and are not specific to any one application programming interface (API).
In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute APIs, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.

Graphics pipeline programming

FIG. 9A is a block diagram illustrating a graphics processor command format 900 that can be used to program graphics processing pipelines, according to some embodiments. FIG. 9B is a block diagram illustrating a graphics processor command sequence 910, according to an embodiment. The solid-lined boxes in FIG. 9A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a subset of the graphics commands. The exemplary graphics processor command format 900 of FIG. 9A includes data fields to identify a client 902 of the command, a command operation code (opcode) 904, and a data field 906. A sub-opcode 905 and a command size 908 are also included in some commands.

In some embodiments, the client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and to route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once a command is received by the client unit, the client unit reads the opcode 904 and, if present, the sub-opcode 905 to determine the operation to perform. The client unit performs the command using information in the data field 906. For some commands, an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word. Other command formats can be used.
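A software view of these fields might look like the following sketch, in which a parser routes each command to its client unit. The field widths and client values are invented for illustration and do not reflect the actual command encoding.

```cpp
// Illustrative command layout and parser routing; not a real encoding.
#include <cstdint>
#include <cstdio>

struct GfxCommand {
    uint8_t  client;      // client unit that should process the command (902)
    uint8_t  opcode;      // operation to perform                        (904)
    uint8_t  sub_opcode;  // optional refinement                         (905)
    uint32_t size;        // explicit command size in dwords             (908)
    const uint32_t* data; // payload used to perform the command         (906)
};

enum Client : uint8_t { MEMORY_IF = 0, RENDER = 1, TWO_D = 2, THREE_D = 3, MEDIA = 4 };

// Parser: inspect the client field and route to the matching unit.
void parse(const GfxCommand& c) {
    switch (c.client) {
        case RENDER:
            std::printf("render unit: op=%02x.%02x\n",
                        unsigned(c.opcode), unsigned(c.sub_opcode));
            break;
        case MEDIA:
            std::printf("media unit: op=%02x\n", unsigned(c.opcode));
            break;
        default:
            std::printf("client %d: op=%02x\n", int(c.client), unsigned(c.opcode));
            break;
    }
}

int main() {
    uint32_t payload[2] = {0xdeadbeef, 0x0};
    parse(GfxCommand{RENDER, 0x10, 0x01, 2, payload});
}
```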
The flow diagram in FIG. 9B illustrates an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in an at least partially concurrent manner.

In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete its currently pending commands. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked "dirty" can be flushed to memory. In some embodiments, the pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low-power state.

In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, the pipeline select command 913 is required only once within an execution context before issuing pipeline commands, unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.

In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program the 3D pipeline 922 and the media pipeline 924. In some embodiments, the pipeline control command 914 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.

In some embodiments, commands related to the return buffer state 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross-thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.

The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with the 3D pipeline state 930, or to the media pipeline 924 beginning at the media pipeline state 940.

The commands to configure the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined based at least in part on the particular 3D API in use.
In some embodiments, 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.

In some embodiments, a 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, the 3D pipeline 922 dispatches shader execution threads to graphics processor execution units.

In some embodiments, the 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a "go" or "kick" command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once the operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back-end operations may also be included for those operations.

In some embodiments, the graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed, and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processing unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.

In some embodiments, the media pipeline 924 is configured in a similar manner to the 3D pipeline 922. A set of commands to configure the media pipeline state 940 are dispatched or placed into a command queue before the media object commands 942. In some embodiments, the commands 940 for the media pipeline state include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, the commands 940 for the media pipeline state also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.

In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing the video data to be processed.
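The ordering just described can be summarized with a driver-side sketch that queues the command sequence of FIG. 9B for a 3D workload and a media workload. The token values and queue interface are invented for illustration; only the ordering follows the text.

```cpp
// Command-sequence assembly sketch; tokens echo the reference numerals
// from FIG. 9B and are not real command encodings.
#include <cstdint>
#include <cstdio>
#include <vector>

enum CmdToken : uint32_t {
    PIPELINE_FLUSH = 0x912, PIPELINE_SELECT = 0x913, PIPELINE_CONTROL = 0x914,
    RETURN_BUFFER_STATE = 0x916, PIPELINE_STATE_3D = 0x930, PRIMITIVE_3D = 0x932,
    EXECUTE = 0x934, MEDIA_PIPELINE_STATE = 0x940, MEDIA_OBJECT = 0x942,
};

struct CommandQueue {
    std::vector<uint32_t> ring;
    void emit(uint32_t token) { ring.push_back(token); }
};

void submit_3d_workload(CommandQueue& q) {
    q.emit(PIPELINE_FLUSH);       // complete pending commands first
    q.emit(PIPELINE_SELECT);      // explicit switch to the 3D pipeline
    q.emit(PIPELINE_CONTROL);     // program pipeline state for the run
    q.emit(RETURN_BUFFER_STATE);  // where intermediate data is written
    q.emit(PIPELINE_STATE_3D);    // vertex buffer/element/depth state, etc.
    q.emit(PRIMITIVE_3D);         // submit primitives for vertex fetch
    q.emit(EXECUTE);              // "go": trigger processing
}

void submit_media_workload(CommandQueue& q) {
    q.emit(PIPELINE_FLUSH);
    q.emit(PIPELINE_SELECT);
    q.emit(MEDIA_PIPELINE_STATE); // decode/encode formats, etc.
    q.emit(MEDIA_OBJECT);         // pointer to the video data to process
    q.emit(EXECUTE);
}

int main() {
    CommandQueue q;
    submit_3d_workload(q);
    submit_media_workload(q);
    std::printf("queued %zu command tokens\n", q.ring.size());
}
```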
In some embodiments, all media pipeline state must be valid before issuing a media object command 942. Once the pipeline state is configured and the media object commands 942 are queued, the media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., a register write). Output from the media pipeline 924 may then be post-processed by operations provided by the 3D pipeline 922 or the media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner to media operations.

Graphics software architecture

FIG. 10 illustrates an exemplary graphics software architecture for a data processing system 1000, according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, the processor 1030 includes a graphics processor 1032 and one or more general-purpose processor cores 1034. The graphics application 1010 and the operating system 1020 each execute in the system memory 1050 of the data processing system.

In some embodiments, the 3D graphics application 1010 contains one or more shader programs that include shader instructions 1012. The shader-language instructions may be in a high-level shader language, such as Direct3D's High-Level Shader Language (HLSL), the OpenGL Shader Language (GLSL), and so forth. The application also includes executable instructions 1014 in a machine language suitable for execution by the general-purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.

In some embodiments, the operating system 1020 is an operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open-source UNIX-like operating system using a variant of the Linux kernel. The operating system 1020 can support a graphics API 1022, such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application may perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010. In some embodiments, the shader instructions 1012 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.

In some embodiments, the user-mode graphics driver 1026 contains a back-end shader compiler 1027 to convert the shader instructions 1012 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to the user-mode graphics driver 1026 for compilation. In some embodiments, the user-mode graphics driver 1026 uses operating system kernel-mode functions 1028 to communicate with a kernel-mode graphics driver 1029. In some embodiments, the kernel-mode graphics driver 1029 communicates with the graphics processor 1032 to dispatch commands and instructions.
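The two compilation paths just described can be sketched as follows; the function names and types are hypothetical stand-ins for the front-end compiler 1024 and back-end compiler 1027, and no real compiler API is implied.

```cpp
// Sketch of API-dependent shader compilation routing; all names invented.
#include <cstdio>
#include <string>

enum class Api { Direct3D, OpenGL, Vulkan };

struct CompiledShader { std::string isa; };

CompiledShader frontend_compile_hlsl(const std::string& src) {
    return {"lowered(" + src + ")"};   // HLSL -> lower-level shader form
}
CompiledShader backend_compile(const std::string& src) {
    return {"hw-isa(" + src + ")"};    // -> hardware-specific representation
}

CompiledShader compile_shader(Api api, const std::string& src) {
    switch (api) {
        case Api::Direct3D:            // OS front-end first, then driver
            return backend_compile(frontend_compile_hlsl(src).isa);
        case Api::Vulkan:              // SPIR intermediate form to driver
        case Api::OpenGL:              // GLSL passed directly to driver
            return backend_compile(src);
    }
    return {};
}

int main() {
    std::printf("%s\n", compile_shader(Api::Direct3D, "ps_main").isa.c_str());
}
```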
IP core implementation

One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium that represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions that represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs the operations described in association with any of the embodiments described herein.

FIG. 11A is a block diagram illustrating an IP core development system 1100 that may be used to manufacture an integrated circuit to perform operations, according to an embodiment. The IP core development system 1100 may be used to generate modular, reusable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 1130 can generate a software simulation 1110 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 1110 can be used to design, test, and verify the behavior of the IP core using a simulation model 1112. The simulation model 1112 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 1115 can then be created or synthesized from the simulation model 1112. The RTL design 1115 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.

The RTL design 1115 or an equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL) or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 1165 using non-volatile memory 1140 (e.g., a hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted over a wired connection 1150 or a wireless connection 1160 (e.g., via the Internet). The fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.
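As a toy illustration of the role of the simulation model 1112, the following C++ fragment functionally models a trivial IP block (a saturating adder) and exhaustively verifies its behavioral contract before any RTL would be written. A real flow would use far more elaborate C/C++ or HDL-level models; everything here is an assumption made for the sketch.

```cpp
// Toy behavioral model and verification in the spirit of a high-level
// software simulation of an IP core. The "IP" is a saturating adder.
#include <cstdint>
#include <cstdio>

// Functional model of the candidate IP block.
uint8_t saturating_add_model(uint8_t a, uint8_t b) {
    unsigned s = unsigned(a) + unsigned(b);
    return s > 0xff ? 0xff : uint8_t(s);
}

// Behavioral verification: exhaustively check the model's contract.
bool verify_model() {
    for (unsigned a = 0; a <= 0xff; ++a)
        for (unsigned b = 0; b <= 0xff; ++b) {
            unsigned expect = (a + b > 0xff) ? 0xff : a + b;
            if (saturating_add_model(uint8_t(a), uint8_t(b)) != expect)
                return false;
        }
    return true;
}

int main() {
    std::printf("model %s\n", verify_model() ? "verified" : "FAILED");
}
```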
FIG. 11B illustrates a cross-sectional side view of an integrated circuit package assembly 1170, according to some embodiments described herein. The integrated circuit package assembly 1170 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 1170 includes multiple units of hardware logic 1172, 1174 connected to a substrate 1180. The logic 1172, 1174 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic 1172, 1174 can be implemented within a semiconductor die and coupled with the substrate 1180 via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the logic 1172, 1174 and the substrate 1180, and can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 1172, 1174. In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. In other embodiments, the package substrate 1180 may include other suitable types of substrates. The package assembly 1170 can be connected to other electrical devices via a package interconnect 1183. The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, another chipset, or a multi-chip module.

In some embodiments, the units of logic 1172, 1174 are electrically coupled with a bridge 1182 that is configured to route electrical signals between the logic 1172 and the logic 1174. The bridge 1182 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1182 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide chip-to-chip connections between the logic 1172 and the logic 1174.

Although two units of logic 1172, 1174 and a bridge 1182 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge 1182 may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.

FIG. 11C illustrates a package assembly 1190 that includes multiple units of hardware logic chiplets connected to a substrate 1180. A graphics processing unit, parallel processor, and/or compute accelerator as described herein can be composed of diverse silicon chiplets that are separately manufactured. In this context, a chiplet is an at least partially packaged integrated circuit that includes distinct units of logic that can be assembled with other chiplets into a larger package. A diverse set of chiplets with different IP core logic can be assembled into a single device. Additionally, the chiplets can be integrated into a base die or base chiplet using active interposer technology. The concepts described herein enable interconnection and communication between the different forms of IP within the GPU. IP cores can be manufactured using different process technologies and composed during manufacturing, which avoids the complexity of converging multiple IPs, especially on a large SoC with several flavors of IP, to the same manufacturing process.
Enabling the use of multiple process technologies improves time to market and provides a cost-effective way to create multiple product SKUs. Additionally, the disaggregated IPs are more amenable to being power-gated independently; components that are not in use on a given workload can be powered off, reducing overall power consumption.

In various embodiments, the package assembly 1190 can include components and chiplets that are interconnected by a fabric 1185 and/or one or more bridges 1187. The chiplets within the package assembly 1190 may have a 2.5D arrangement using Chip-on-Wafer-on-Substrate stacking, in which multiple dies are stacked side by side on a silicon interposer 1189 that couples the chiplets with the substrate 1180. The substrate 1180 includes electrical connections to the package interconnect 1183. In one embodiment, the silicon interposer 1189 is a passive interposer that includes through-silicon vias (TSVs) to electrically couple the chiplets within the package assembly 1190 to the substrate 1180. In one embodiment, the silicon interposer 1189 is an active interposer that includes embedded logic in addition to the TSVs. In such embodiments, the chiplets within the package assembly 1190 are arranged using 3D face-to-face die stacking on top of the active interposer 1189. The active interposer 1189 can include hardware logic for I/O 1191, cache memory 1192, and other hardware logic 1193, in addition to the interconnect fabric 1185 and a silicon bridge 1187. The fabric 1185 enables communication between the various logic chiplets 1172, 1174 and the logic 1191, 1193 within the active interposer 1189. The fabric 1185 may be a NoC interconnect or another form of packet-switched fabric that switches data packets between components of the package assembly. For complex assemblies, the fabric 1185 may be a dedicated chiplet that enables communication between the various hardware logic of the package assembly 1190.

Bridge structures 1187 within the active interposer 1189 may be used to facilitate point-to-point interconnects between, for example, logic or I/O chiplets 1174 and memory chiplets 1175. In some implementations, the bridge structures 1187 may also be embedded within the substrate 1180. The hardware logic chiplets can include special-purpose hardware logic chiplets 1172, logic or I/O chiplets 1174, and/or memory chiplets 1175. The hardware logic chiplets 1172 and the logic or I/O chiplets 1174 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), parallel processors, or other accelerator devices described herein. The memory chiplets 1175 can be DRAM (e.g., GDDR, HBM) memory or cache (SRAM) memory. The cache memory 1192 within the active interposer 1189 (or substrate 1180) can act as a global cache for the package assembly 1190, as part of a distributed global cache, or as a dedicated cache for the fabric 1185.

Each chiplet can be fabricated as a separate semiconductor die and coupled with a base die that is embedded within the substrate 1180 or coupled with the substrate 1180. The coupling with the substrate 1180 can be performed via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the various chiplets and logic within the substrate 1180.
The interconnect structure 1173 can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic, I/O, and memory chiplets. In one embodiment, an additional interconnect structure couples the active interposer 1189 with the substrate 1180.

In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. In other embodiments, the substrate 1180 may include other suitable types of substrates. The package assembly 1190 can be connected to other electrical devices via a package interconnect 1183. The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, another chipset, or a multi-chip module.

In some embodiments, a logic or I/O chiplet 1174 and a memory chiplet 1175 can be electrically coupled via a bridge 1187 that is configured to route electrical signals between the logic or I/O chiplet 1174 and the memory chiplet 1175. The bridge 1187 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1187 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide chip-to-chip connections between the logic or I/O chiplet 1174 and the memory chiplet 1175. The bridge 1187 may also be referred to as a silicon bridge or an interconnect bridge. For example, in some embodiments, the bridge 1187 is an Embedded Multi-die Interconnect Bridge (EMIB). In some embodiments, the bridge 1187 may simply be a direct connection from one chiplet to another chiplet.

FIG. 11D illustrates a package assembly 1194 including interchangeable chiplets 1195, according to an embodiment. The interchangeable chiplets 1195 can be assembled into standardized slots on one or more base chiplets 1196, 1198. The base chiplets 1196, 1198 can be coupled via a bridge interconnect 1197, which can be similar to the other bridge interconnects described herein and may be, for example, an EMIB. Memory chiplets can also be connected to logic or I/O chiplets via a bridge interconnect. I/O and logic chiplets can communicate via an interconnect fabric. The base chiplets can each support one or more slots in a standardized format for logic or I/O or memory/cache.

In one embodiment, SRAM and power delivery circuits can be fabricated into one or more of the base chiplets 1196, 1198, and the base chiplets 1196, 1198 can be fabricated using a different process technology relative to the interchangeable chiplets 1195 that are stacked on top of them. For example, the base chiplets 1196, 1198 can be fabricated using a larger process technology, while the interchangeable chiplets can be manufactured using a smaller process technology. One or more of the interchangeable chiplets 1195 may be memory (e.g., DRAM) chiplets. Different memory densities can be selected for the package assembly 1194 based on the power and/or performance targeted for the product that uses the package assembly 1194. Additionally, logic chiplets with a different number or type of functional units can be selected at time of assembly based on the power and/or performance targeted for the product.
Additionally, chiplets containing IP logic cores of differing types can be inserted into the interchangeable chiplet slots, enabling hybrid designs that can mix and match IP blocks of different technologies.

Exemplary System-on-Chip Integrated Circuit

FIGS. 12 and 13A-13B illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

FIG. 12 is a block diagram illustrating an exemplary system-on-chip integrated circuit 1200 that may be fabricated using one or more IP cores, according to an embodiment. The exemplary integrated circuit 1200 includes one or more application processors 1205 (e.g., CPUs) and at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same design facility or multiple different design facilities. The integrated circuit 1200 includes peripheral or bus logic including a USB controller 1225, a UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260, including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.

FIGS. 13A-13B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG. 13A illustrates an exemplary graphics processor 1310 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. FIG. 13B illustrates an additional exemplary graphics processor 1340 of a system-on-chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. The graphics processor 1310 of FIG. 13A is an example of a low-power graphics processor core. The graphics processor 1340 of FIG. 13B is an example of a higher-performance graphics processor core. Each of the graphics processors 1310, 1340 can be a variant of the graphics processor 1210 of FIG. 12.

As shown in FIG. 13A, the graphics processor 1310 includes a vertex processor 1305 and one or more fragment processors 1315A-1315N (e.g., 1315A, 1315B, 1315C, 1315D, through 1315N-1 and 1315N). The graphics processor 1310 can execute different shader programs via separate logic, such that the vertex processor 1305 is optimized to execute operations for vertex shader programs, while the one or more fragment processors 1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 1305 performs the vertex-processing stage of the 3D graphics pipeline and generates primitives and vertex data.
The fragment processor(s) 1315A-1315N use the primitive data and vertex data generated by the vertex processor 1305 to produce a frame buffer for display on a display device. In one embodiment, the fragment processor(s) 1315A-1315N are optimized to execute fragment shader programs as provided in the OpenGL API, which may be used to perform operations similar to the pixel shader programs provided in the Direct 3D API.

The graphics processor 1310 additionally includes one or more memory management units (MMUs) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B. The one or more MMUs 1320A-1320B provide virtual-to-physical address mapping for the graphics processor 1310, including for the vertex processor 1305 and/or the fragment processor(s) 1315A-1315N, which, in addition to referencing vertex data or image/texture data stored in the one or more caches 1325A-1325B, may also reference vertex data or image/texture data stored in memory. In one embodiment, the one or more MMUs 1320A-1320B can be synchronized with other MMUs in the system, including one or more MMUs associated with the one or more application processors 1205, image processor 1215, and/or video processor 1220 of FIG. 12, such that each processor 1205-1220 can participate in a shared or unified virtual memory system. According to an embodiment, the one or more circuit interconnects 1330A-1330B enable the graphics processor 1310 to interface with other IP cores in the SoC, either via an internal bus of the SoC or via a direct connection.

As shown in FIG. 13B, the graphics processor 1340 includes the one or more MMUs 1320A-1320B, caches 1325A-1325B, and circuit interconnects 1330A-1330B of the graphics processor 1310 of FIG. 13A. The graphics processor 1340 includes one or more shader cores 1355A-1355N (e.g., 1355A, 1355B, 1355C, 1355D, 1355E, 1355F, through 1355N-1 and 1355N), providing a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader code to implement vertex shaders, fragment shaders, and/or compute shaders. The unified shader core architecture can also be configured to execute directly compiled high-level GPGPU programs (e.g., CUDA). The exact number of shader cores present can vary among embodiments and implementations. In addition, the graphics processor 1340 includes an inter-core task manager 1345, which serves as a thread dispatcher for dispatching execution threads to the one or more shader cores 1355A-1355N, and a tiling unit 1358 to accelerate tiling operations for tile-based rendering, in which the rendering operations for a scene are subdivided in image space, for example, to exploit local spatial coherence within the scene or to optimize the use of internal caches.

System architecture for cloud gaming

FIG. 14 illustrates frame encoding and decoding for a cloud gaming system 1400. The client 1440 may generally be a consumer of graphics (e.g., game, virtual reality/VR, augmented reality/AR) content that is hosted, processed, and rendered on the server 1420. The illustrated scalable server 1420 is capable of providing graphics content to multiple clients simultaneously (e.g., by utilizing parallel and distributed processing and rendering resources).
The server 1420 includes a graphics processor 1430 (e.g., a GPU), a host processor 1424 (e.g., a CPU), and a network interface controller (NIC) 1422. The NIC 1422 may receive a request for graphics content from the client 1440. The request from the client 1440 may cause the graphics content to be retrieved from memory via an application executing on the host processor 1424. The host processor 1424 may perform high-level operations such as, for example, determining the position, collision, and motion of objects in a given scene. Based on the high-level operations, the host processor 1424 may generate rendering commands that are combined with scene data and executed by the graphics processor 1430. The rendering commands may cause the graphics processor 1430 to define scene geometry, shading, lighting, motion, textures, camera parameters, and so forth for the scene to be rendered via the client 1440.

More specifically, the illustrated graphics processor 1430 includes a graphics renderer 1432 that executes rendering procedures according to the rendering commands generated by the host processor 1424. The output of the graphics renderer 1432 may be a stream of raw video frames provided to a frame capturer 1434. The illustrated frame capturer 1434 is coupled to an encoder 1436, which may compress/format the raw video stream for transmission over the network 1410. The encoder 1436 may use a variety of video compression algorithms such as, for example, the H.264 standard from the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), the MPEG-4 Advanced Video Coding (AVC) standard from the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC), and so forth.

The illustrated client 1440 includes a NIC 1442 to receive the transmitted video stream from the server 1420. The client 1440 may be a desktop computer, notebook computer, tablet computer, convertible tablet, wearable device, mobile Internet device, smartphone, personal digital assistant, media player, and so forth. The NIC 1442 may provide the underpinnings of the physical layer and the software layer of the network interface in the client 1440 to facilitate communication over the network 1410. The client 1440 may also include a decoder 1444 that uses the same formatting/compression scheme as the encoder 1436. The decompressed video stream can thus be provided from the decoder 1444 to a video renderer 1446. The illustrated video renderer 1446 is coupled to a display 1448 that visually presents the graphics content.
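For illustration only, the render-capture-encode-decode path described above can be sketched in a few lines of Python. The sketch uses zlib as a toy stand-in for the H.264/AVC encoder 1436 and decoder 1444 so that it runs anywhere, and all function names are hypothetical rather than part of any described implementation:

import zlib

def render_frame(frame_index: int) -> bytes:
    # Stand-in for the raw frame output of the graphics renderer 1432.
    return bytes((frame_index + i) % 256 for i in range(1024))

def encode_frame(raw: bytes) -> bytes:
    # Stand-in for the encoder 1436; a real system would use H.264/AVC.
    return zlib.compress(raw)

def decode_frame(encoded: bytes) -> bytes:
    # Client-side decoder 1444; must use the same scheme as the encoder.
    return zlib.decompress(encoded)

# Server side: render -> capture -> encode -> "transmit" (here, a list).
network = [encode_frame(render_frame(i)) for i in range(3)]

# Client side: receive -> decode -> hand off to the video renderer 1446.
for packet in network:
    frame = decode_frame(packet)
    assert len(frame) == 1024  # decompressed frame is ready for display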
The client 1440 can perform real-time interactive data streaming, which includes collecting user input from an input device 1450 and transmitting the user input to the server 1420 via the network 1410. This real-time interactive aspect of cloud gaming can present latency challenges. Described herein is a cloud gaming system that enables games that are not latency-sensitive to be executed in a cloud data center, while games that are more latency-sensitive are executed on servers at the edge of the cloud gaming network. Edge servers may be geographically dispersed so that a server with relatively low latency relative to the client device can be selected. In the case where the client device includes a high-performance GPU, and/or for games that are extremely latency-sensitive, the graphics operations for a cloud-based game can be executed directly on the client device. In such cases, both the server 1420 and the client 1440 can reside on the same computing device, where the network 1410 is an internal network connection on the client.

FIG. 15 illustrates a cloud gaming system 1500 in which game servers are distributed across multiple cloud and data center systems. The cloud gaming system 1500 includes a cloud authentication node 1501 and a function-as-a-service (FaaS) endpoint 1503, each of which is in electronic communication with a browser client 1515. The cloud gaming system 1500 also includes a telemetry server 1507, which receives system telemetry from the FaaS endpoint 1503 and from the GPU server that is executing the game application. The cloud gaming system 1500 additionally includes a peer server 1517 and a STUN server 1519, which facilitate the establishment of a network connection between the client and a GPU server.

In one embodiment, the cloud gaming system 1500 includes an orchestration master 1505, which manages the execution nodes and storage containers of the cloud gaming system 1500, and multiple sets 1509, 1511, 1513 of GPU servers. The multiple sets of GPU servers may reside on different cloud networks associated with different cloud service providers or at co-located data centers. For example, the first set 1509 of GPU servers may be provided by a first cloud service provider (e.g., Microsoft Azure). The second set 1511 of GPU servers may be provided by a second cloud service provider (e.g., Amazon Web Services). The third set 1513 of GPU servers may be co-located servers hosted at one or more co-location data centers. In one embodiment, the cloud gaming system 1500 is implemented in part using Kubernetes, although not all embodiments are so limited. In such an embodiment, the orchestration master 1505 may be a Kubernetes master, and the GPU servers may include a kubelet node agent.

During operation, the browser client 1515, or another cloud gaming client such as a cloud gaming client application, may communicate with the cloud authentication node 1501 to authenticate the client to the cloud gaming system 1500. The cloud authentication node 1501 returns an authorization token to the browser client 1515. The browser client 1515 can use the authorization token to request a game launch via the FaaS endpoint 1503. The FaaS endpoint 1503 communicates with the orchestration master 1505 to start the game. The game can be launched from a container running on a game server. The orchestration master 1505 selects a server from the sets of GPU servers 1509, 1511, 1513 to become the game server for the game to be launched. In one embodiment, the orchestration master 1505 may initiate the execution of a pod on the selected GPU server. A pod is a group of containerized components provided by one or more containers located on the same server. The containers in a pod can share resources. The orchestration master 1505 then returns a unique session ID to the FaaS endpoint 1503.
The FaaS endpoint 1503 then provides the unique session ID to the browser client 1515 and to the GPU server selected from the sets of GPU servers 1509, 1511, 1513.

The browser client 1515 can then establish one or more user datagram protocol (UDP) sessions through a network address translation (NAT) traversal server (e.g., the STUN server 1519), enabling the browser client to connect to the selected GPU server in the sets of GPU servers 1509, 1511, 1513. For example, the STUN server 1519 enables the browser client 1515 and the selected GPU server to determine their corresponding public IP addresses. In one embodiment, the public IP address returned to the GPU server is the public IP address associated with the pod that is associated with the game to be executed by the GPU server. The browser client 1515 and the selected GPU server can each register with the peer server 1517 using a <session ID, public IP> tuple, where the session ID is the unique session ID provided by the FaaS endpoint 1503 and the public IP is the public IP address provided by the STUN server 1519. The peer server 1517 notifies the browser client 1515 of the existence of the selected GPU server, and also notifies the selected GPU server of the existence of the browser client 1515. Once notified of each other's existence, the browser client 1515 and the selected GPU server can establish a UDP WebRTC connection to start the game. During game play, the FaaS endpoint 1503 and the selected GPU server (illustrated as selected from the co-located set 1513 of GPU servers) can transmit telemetry to the telemetry server 1507.
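The rendezvous flow described above can be summarized in a brief sketch. The following Python fragment is purely illustrative; the PeerServer class and its in-memory registration table are assumptions standing in for the peer server 1517, and the IP addresses are examples:

class PeerServer:
    def __init__(self):
        self.registrations = {}  # session ID -> list of registered public IPs

    def register(self, session_id: str, public_ip: str) -> list:
        peers = self.registrations.setdefault(session_id, [])
        peers.append(public_ip)
        # Once both endpoints are registered under the same session ID, each
        # learns the other's address and can attempt a UDP WebRTC connection.
        return [ip for ip in peers if ip != public_ip]

peer_server = PeerServer()
session_id = "sess-1234"  # unique session ID from the FaaS endpoint 1503
peer_server.register(session_id, "203.0.113.10")           # browser client
others = peer_server.register(session_id, "198.51.100.7")  # selected GPU server
print(others)  # ['203.0.113.10'] -> the GPU server now knows the client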
FIG. 16 illustrates a cloud gaming system 1600 in which cloud-based, edge-based, or client-based computing resources may be used to perform graphics processing operations. In one embodiment, the system includes a cloud-based computing, GPU, and storage system 1602 that is coupled, via a wide area network (WAN) such as the Internet, to one or more edge GPU servers 1604 and to end clients (e.g., a high-performance client 1620, a streaming client 1630) within a client endpoint 1610, such as the home network of a user of the cloud gaming system 1600. The cloud gaming system 1600 described herein enables game applications to be executed using remote (e.g., cloud, edge) computing and/or GPU resources without modification. Games requiring high-end graphics processing capabilities can be played on streaming clients 1630, such as thin clients with limited graphics processing capabilities relative to high-performance computing devices. The streaming client 1630 may be, for example, a TV or TV set-top box, a game console, a streaming-based game console, or a media streaming device. The streaming client 1630 may include a web browser or streaming application that includes a web client engine 1632 for connecting with servers of the cloud-based computing, GPU, and storage system 1602 or the edge GPU servers 1604 and receiving a stream of game application frames from those servers.

In one embodiment, the cloud-based computing, GPU, and storage system 1602 may include a set of interconnected data centers that house a large number of computing and storage resources. The cloud-based computing, GPU, and storage system 1602 may provide storage resources on which application data for games provided by the cloud gaming system 1600 may be stored. For some games, the computing resources and/or GPU resources of the cloud-based computing, GPU, and storage system 1602 may be used to execute those games. In particular, the computing or GPU resources of the cloud-based computing, GPU, and storage system 1602 can be used to execute games that are not extremely latency-sensitive.

For latency-sensitive games, the computing and/or GPU resources of the GPU servers 1604 located at the edge of the cloud gaming system 1600 may be used. In one embodiment, the GPU servers 1604 may be located at data centers close to end users, which reduces the perceived input latency associated with the executed game application. The GPU server 1604 may include a set 1608 of high-performance GPUs that can be used to execute a game server stack 1606. In one configuration, a single GPU or a portion of a GPU (e.g., a GPU slice) may perform graphics processing operations for a single instance of a game. In other configurations, multiple GPU slices and/or multiple GPUs may cooperate to execute the game application. For example, implicit multi-GPU processing managed by the graphics driver can be performed. For games that include support for explicit multi-GPU processing, the graphics processing of the game can be distributed across multiple graphics processing devices.

For games that are extremely latency-sensitive, when the client endpoint 1610 includes a high-performance client 1620 (such as a desktop or laptop gaming system with a powerful graphics processor), the cloud gaming system 1600 described herein can also make use of local graphics processing for cloud-based games. When playing a cloud-based game on the high-performance client 1620, the cloud gaming system 1600 can enable at least a portion of the graphics processing activity of the game to be executed by one or more local GPUs 1626 on the high-performance client 1620. In one embodiment, graphics processing for games played on the streaming client 1630 within the client endpoint 1610 can also be executed on the high-performance client 1620, where the output rendered on the high-performance client 1620 is streamed to the streaming client 1630.

When the graphics operations of a game are to be performed on the high-performance client 1620, a version of the game server stack 1624 can be retrieved from the cloud-based computing, GPU, and storage system 1602. The game server stack 1624 can then be executed using the one or more local GPUs 1626 on the high-performance client 1620. The game may be played via a web browser application 1622 or a dedicated streaming client configured to communicate with the game server stack, the cloud-based computing, GPU, and storage system 1602, and/or the one or more GPU servers 1604.

In one embodiment, the various clients and servers of the cloud gaming system 1600 may communicate via network links 1603, 1605, 1607, 1609, 1615, and 1629. In one embodiment, the network link 1603 established between the GPU server 1604 and the cloud-based computing, GPU, and storage system 1602 enables the GPU server 1604 to access remote storage that stores the games to be executed by the GPU server 1604 and to receive control signals used to start and terminate game applications. Game data retrieved from cloud-based storage may be cached by the GPU server 1604. Rendered frames for the application can be streamed to the streaming client 1630 (via the network link 1607) or to the high-performance client 1620 (via the network link 1605).
In the case where the game application is executed at least partially on the high-performance client 1620, a network link 1621 can be used to enable communication between the web browser application 1622 and the game server stack 1624. The network link 1609 can be used to launch game applications, and the output of the game server stack 1624 can be streamed to the web browser application 1622 via the network link 1621. The network link 1615 enables the game server stack 1624 to access the application data of the game to be executed. In the case where a game is played on the streaming client 1630 and executed on the high-performance client 1620, a network link 1627 may be established to stream the rendered frames to the streaming client 1630. The streaming client 1630 can use the network link 1629 to launch a cloud game to be played via the streaming client. In one embodiment, the network links 1603, 1609, 1615, and 1629 used to transmit application data and control signals use a connection-oriented protocol, such as the Transmission Control Protocol (TCP). In one embodiment, the network links 1605, 1607, 1621, and 1627 used to stream rendered game output use a connectionless protocol, such as the User Datagram Protocol (UDP).

A game application can be encapsulated into the game server stack 1606 without modifying the game application. The game server stack 1606 may include a partitioned, containerized, and/or virtualized game application along with associated resources and APIs for executing the game application. In one embodiment, the libraries and APIs used by the game application are adapted to enable the game to work in the cloud gaming environment, as detailed in FIGS. 17A-17B.

FIGS. 17A-17B illustrate a system 1700 and a method 1750 for encapsulating a game application so that the game can be played on a server and/or client device. FIG. 17A illustrates a system 1700 for encapsulating a cloud-based game in an encapsulation layer that enables the cloud-based game to be executed on a server or client device. FIG. 17B illustrates a method 1750 for encapsulating cloud-based games.

As shown in FIG. 17A, the game server stack for a cloud-based game includes a container image. The container image includes the application files and libraries that are executed as a process 1710 of the game application. The process 1710 of the game application includes the game core logic 1720 and the envelopes 1701-1705 of the encapsulation layer, which selectively relay API commands made by the game core logic 1720. The encapsulation layer includes envelopes for one or more graphics APIs 1711, a system interface 1712, a file system 1713, a keyboard driver 1714, an audio driver 1715, and a mouse and/or controller driver 1716. The envelopes present themselves to the game core logic 1720 as the libraries, frameworks, and interfaces commonly used by the game core logic 1720. The envelopes can then relay commands to host system components or to remote computing devices connected via a network interface.

For example, the envelope for the one or more graphics APIs 1711 can receive API calls to graphics APIs (e.g., Direct 3D, OpenGL, Vulkan) made by the game core logic 1720 and relay these commands to the host GPU 1701B and/or to a remote device connected via the network 1701A. The envelope for the system interface 1712 can receive system interface commands and send these commands over the network 1702A to be relayed to a remote device, or to the appropriate host system API 1702B.
The envelope for the file system 1713 may receive file system commands and satisfy those commands by accessing a container image 1703A containing cloud game data, or relay the commands to the host file system 1703B. The envelope for the keyboard driver 1714 can send or receive keyboard input via the network 1704A or from the host keyboard or game controller 1704B. The envelope for the audio driver 1715 can send or receive audio data via the network 1705A or from the host speaker/microphone 1705B. The envelope for the mouse or game controller driver 1716 can send or receive input data via the network 1706A or from the host mouse or game controller 1706B. Whether commands from the game core logic 1720 are sent to the local API or over the network depends in part on whether the game application is executing on the server or on the client.

When the game application is executed on one or more cloud servers or edge servers, the graphics processing for the game is performed on the one or more servers, and the rendered output is transmitted to the cloud gaming client via the network. Remotely and/or locally cached container data can store application data, configuration data, and/or saved game data. Configuration data for a game can be stored on one or more servers or on the client. In one embodiment, a subset of client folders can be voluntarily mapped to the server by the user, allowing the user to store a subset of game data, such as configuration data or saved game data, on the client while making that data accessible to the remotely executed game application. Such folders can be synchronized automatically or manually between the client and the server, allowing remote saves or configuration files to be accessed locally, or allowing local saves or configuration files to be backed up remotely.

When a game application is executed on a high-performance client, at least a portion of the instructions in the game server stack can be transmitted to the client and executed locally on the client. The envelope for the one or more graphics APIs 1711 receives commands from the game core logic 1720 and sends those commands to the one or more GPUs on the high-performance client. Depending on the game and system configuration, file system access is relayed by the envelope for the file system 1713 to the container image containing the game application data or to the file system of the high-performance client. As with remotely executed games, some game files (such as configuration data and saved game data) can be stored on the client, stored on the server, or synchronized between the client and the server. In one embodiment, the output of the one or more GPUs may be presented directly to a display window on the client. The output can also be encoded by the game server stack and transmitted to a web browser or streaming client application executing on the client. Data transmission between the game server stack and the browser/streaming client can occur via inter-process communication or via a virtual network connection on the high-performance client. In one embodiment, the output of the one or more GPUs may be encoded and transmitted to a networked streaming client connected to the high-performance client.
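The relaying behavior of the envelopes can be illustrated with a minimal sketch. The class names and backends below are hypothetical and greatly simplified; they only show how a single envelope can present a familiar API surface to the game core logic while dispatching either to a host component or over a network, as described above:

class HostGPU:
    def submit(self, command: str) -> str:
        return f"host GPU executed: {command}"

class RemoteRelay:
    def submit(self, command: str) -> str:
        return f"relayed over network: {command}"

class GraphicsEnvelope:
    # Presents the graphics API surface expected by the game core logic and
    # relays each call to the backend selected at configuration time.
    def __init__(self, run_locally: bool):
        self._backend = HostGPU() if run_locally else RemoteRelay()

    def draw(self, command: str) -> str:
        return self._backend.submit(command)

print(GraphicsEnvelope(run_locally=True).draw("clear"))
# host GPU executed: clear
print(GraphicsEnvelope(run_locally=False).draw("clear"))
# relayed over network: clear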
As shown in FIG. 17B, the method 1750 for encapsulating cloud-based games includes an operation for importing an application into storage associated with the cloud gaming system (1752). Importing the application can occur on a cloud server when the application is integrated into the cloud gaming system. In one embodiment, importing the application may occur on a client device, so that a user can import locally stored games to enable remote execution of those games via a server of the cloud gaming system. The method further includes encapsulating the application in an encapsulation layer, where the encapsulation layer may be configured to enable selective execution of the application by a server device of the cloud gaming system or a client device of the cloud gaming system (1754). In one embodiment, the encapsulated application includes core logic and multiple envelopes associated with the encapsulation layer, and the encapsulation layer is configured to selectively relay API commands made by the core logic. The method additionally includes mapping the application via the encapsulation layer for execution by a processing resource selected from a set of processing resources, the set including processing resources of a server device of the cloud gaming system and processing resources of a client device of the cloud gaming system (1756). The method further includes executing the application, via the encapsulation layer, on the processing resources mapped via the encapsulation layer (1758). After resources are mapped via the encapsulation layer, the application can be executed via the encapsulation layer on a server of the cloud gaming system or on a client device of the cloud gaming system.
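The sequence of operations in method 1750 can be expressed as a short control-flow sketch. The function bodies below are placeholders (only the ordering of blocks 1752-1758 follows the text), and every name is hypothetical:

def import_application(app: str, cloud_storage: list) -> None:
    cloud_storage.append(app)                       # block 1752

def encapsulate(app: str) -> dict:
    return {"core_logic": app,                      # block 1754
            "envelopes": ["graphics", "file system", "audio", "input"]}

def map_resources(pkg: dict, resource: str) -> dict:
    pkg["resource"] = resource                      # block 1756
    return pkg

def execute(pkg: dict) -> str:
    return f"running {pkg['core_logic']} on {pkg['resource']}"  # block 1758

storage: list = []
import_application("example_game", storage)
pkg = map_resources(encapsulate(storage[-1]), "edge-gpu-server")
print(execute(pkg))  # the mapped resource could equally be a client GPU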
FIG. 18 illustrates an exemplary server 1800 according to an embodiment. The server 1800 shown represents one embodiment; the configuration may differ in other embodiments. The server 1800 can be used as a GPU server as described herein and includes non-volatile memory (NVM 1819), system memory (MEM 1821), a set of central processing units (CPU 1823, CPU 1825), and a set of graphics processing units (GPU 1829, 1831, 1833). The set of central processing units can execute a server operating system (Server OS 1817). The server operating system can communicate with a compatibility runtime framework 1807, which provides a software execution environment that enables execution of the software associated with a node agent 1803. The node agent 1803 communicates with the orchestration master 1505 and includes a container runtime 1805 that facilitates the execution of game application pods. The orchestration master 1505 can manage the life cycle of a game via control of the containers associated with the game application. The game application pods executed via the container runtime 1805 correspond to the game server stacks 1606 and 1624 of FIG. 16. The containers provide a consistent packaging mechanism for game applications, configurations, and dependencies.

Containerized game applications can be executed by the server OS 1817 via the runtime framework 1807 without using a hypervisor. Multiple containerized game applications (e.g., games 1809A-1809N) can execute concurrently, where API commands issued by the games 1809A-1809N are managed and filtered through thunk layers 1811A-1811B, which are associated with the API encapsulation layer shown in FIG. 17A. In one embodiment, the thunk layers provide isolation between the various games executed by the server 1800. For example, where the games 1809A-1809N expect to use standard operating system APIs, the thunk layers 1811A-1811B provide alternative implementations of those libraries. In addition, if a game 1809A-1809N attempts to access a file on the local file system, the file access can be redirected toward a cloud-based file system. For example, game access to the local keyboard can actually be serviced by a remote keyboard.

In addition, when a user wants to play a game 1809A-1809N from a different machine, the game can be started with the same saved game data, because that data is stored in the cloud. File system reads/writes can be redirected to a central location. In one configuration, some files come from remote storage, while other files can be stored locally. Progressive downloads can be used to bring in assets as needed. While an asset is being downloaded, the game can be executed remotely and streamed to the client.

FIG. 19 illustrates a hybrid file system 1900 that can be used to provide a consistent gaming experience across locally executed and remotely executed games. In one embodiment, cloud group storage 1901 may be used to store game and/or system registry data 1905, game save data 1903, and user profile data 1909. Regardless of whether the game process 1907 is executed by a cloud server, an edge server, or a high-performance client, the data in the cloud group storage 1901 is universally accessible to the game process 1907. Based on the user profile data 1909, the game save data 1903 and the game and/or system registry data 1905 can be mapped to the game process 1907 executed for the user. Remote synchronization 1913 can be used to enable game assets 1911 to be synchronized from a content delivery network (CDN) to local storage (e.g., a local SSD 1915) and accessed by the game process 1907. In one embodiment, the local SSD 1915 may be flash memory or other non-volatile memory dedicated to, or directly coupled to, the graphics processor. Remote synchronization 1913 can be performed when a remote server is provisioned to execute a game. Remote synchronization 1913 to a high-performance client can also be performed in the background during a remote game session. After the client terminates the remote game session, the remote synchronization 1913 can continue with a higher priority. In one embodiment, the local SSD 1915 may be a GPU-attached SSD that is directly connected to the graphics processor device.
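The redirection of file system traffic described above can be sketched as follows. The dictionaries standing in for cloud and local storage, the path prefixes, and the class name are all illustrative assumptions:

class RedirectingFileSystem:
    def __init__(self, remote_prefixes: list):
        self.cloud: dict = {}   # stand-in for cloud-based storage
        self.local: dict = {}   # stand-in for the local file system
        self.remote_prefixes = remote_prefixes

    def _store_for(self, path: str) -> dict:
        # Redirect reads/writes for designated paths to cloud storage.
        if any(path.startswith(p) for p in self.remote_prefixes):
            return self.cloud
        return self.local

    def write(self, path: str, data: bytes) -> None:
        self._store_for(path)[path] = data

    def read(self, path: str) -> bytes:
        return self._store_for(path)[path]

fs = RedirectingFileSystem(remote_prefixes=["saves/", "registry/"])
fs.write("saves/slot0.dat", b"checkpoint")  # lands in cloud storage
fs.write("cache/tex0.bin", b"texture")      # stays on the local disk
assert fs.read("saves/slot0.dat") == b"checkpoint"

Because saves and registry data resolve to the cloud store, the same saved game follows the user regardless of which machine executes the game, consistent with the behavior described above.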
FIG. 20 illustrates a cloud gaming system 2000 in which command streams from multiple games can be combined into a single context. Rendering work can be scheduled to minimize jitter in frame production, and resources can be shared between game instances. This concept can also be enhanced by using non-volatile memory on the GPU.

For example, a game process 2001 may communicate with a 3D API scheduler process (e.g., a Direct X scheduler process 2009) via a thunk layer 2003 using a first context (Ctx 1). An additional game process 2005 can communicate with the 3D API via a thunk layer 2007 using a second context (Ctx 2). Using a third context (Ctx 0), the 3D API scheduler can aggregate commands from the different games into a single context on the GPU 2013. By using a single context, the GPU can render into multiple render targets 2015, 2017, where each render target is associated with a separate game. Combining multiple games into a single context can be performed via operations at the thunk layer, which can add an additional layer of logic and abstraction over the 3D API scheduler process. The cloud gaming system 2100 of FIG. 21 illustrates that this concept can be extended to enable multiple servers to share a network-attached GPU.

FIG. 21 illustrates a cloud gaming system 2100 including GPU sharing across multiple server devices. The GPU sharing shown in FIG. 20 can be extended beyond one server, enabling non-GPU servers 2101A-2101K to use network-attached GPUs 2121, 2123 in a data center. Commands (Gfx API 2113, 2117) from the non-GPU servers 2101A-2101K for the hosted game processes (game process 2103, game process 2105, game process 2109 through game process 2111, and so forth) can be streamed to the network-attached GPUs 2121, 2123. A cluster scheduler 2127 is used, which has accurate knowledge of the resources residing on each GPU 2121, 2123. The cluster scheduler performs real-time routing of draw commands (e.g., Gfx API 2113, 2117). Each frame can be rendered on a different GPU. For the generation of encoded video 2119, a single video encoding context can be shared across the GPU cluster. The encoded video 2119 may be encoded in the various formats described herein. GPU and video encoding performance can be dynamically adjusted based on the WebRTC 2125 API.
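The per-frame routing performed by the cluster scheduler 2127 can be sketched as a simple least-loaded dispatch. A production scheduler would also track the resources resident on each GPU, as noted above; the load metric and names below are illustrative assumptions:

class ClusterScheduler:
    def __init__(self, gpu_ids: list):
        self.load = {gpu: 0 for gpu in gpu_ids}

    def route_frame(self, game_id: str, commands: list) -> str:
        gpu = min(self.load, key=self.load.get)  # pick the least-loaded GPU
        self.load[gpu] += len(commands)
        return f"frame for {game_id} rendered on {gpu}"

sched = ClusterScheduler(["gpu-2121", "gpu-2123"])
print(sched.route_frame("game-A", ["draw"] * 3))  # rendered on gpu-2121
print(sched.route_frame("game-B", ["draw"] * 1))  # rendered on gpu-2123
print(sched.route_frame("game-A", ["draw"] * 1))  # gpu-2123 is least loaded

Note that successive frames of the same game may land on different GPUs, consistent with the statement above that each frame can be rendered on a different GPU.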
FIG. 22 illustrates a cloud gaming system 2200 including end-to-end path optimization. In one embodiment, the cloud gaming system 2200 may utilize WebRTC (real-time communication), which can be used to achieve end-to-end path optimization. WebRTC can be made available on all endpoints (thin clients, all browsers). Because the network changes dynamically, it is important to achieve real-time responsiveness under critical network conditions. Wi-Fi as the last-meter delivery mechanism is the largest contributor to changes in network conditions. Various options are available for using WebRTC to optimize a cloud gaming solution.

In one embodiment, the GPU 2201 may include a render target 2203 into which the graphics pipeline 2205 writes frame data for the game. The data of the render target 2203 may be encoded by the encoder 2207 and written into system memory 2208 as encoded video bits 2209. A WebRTC engine 2213 executed by the CPU 2211 may provide hints 2231 back to the encoder 2207 to optimize the encoding process based on the WebRTC network feedback processed by the WebRTC engine 2213. The encoded video bits 2209 may be transmitted via a network interface controller (NIC 2215) through the Internet (e.g., core Internet 2217) to the user's Internet service provider (e.g., ISP Internet 2229). The data is then relayed through the last-meter network 2227 to the home network (e.g., home Wi-Fi 2225). Where a wireless network is used, the wireless network data can be processed by the Wi-Fi driver 2223 on the client computing device, which can relay the data to the web browser 2219, with the web browser 2219 acting as the streaming client for the cloud gaming service. The web browser 2219 may include WebRTC 2221 logic, which can provide network feedback to the WebRTC engine 2213 through the feedback path 2233.

In one embodiment, the WebRTC network feedback is augmented with signals from the Wi-Fi driver 2223. In addition, reinforcement learning can be used to model the path to each client's home network, so that each client receives streaming data over an optimized path. Using Wi-Fi 6 can also improve predictability. Cloud gaming service logic can be added to access points to enhance the predictability of, and the metric collection associated with, those access points. Where a 5G network is used, hooks can be added to the 5G control plane to implement quality-of-service techniques. In addition, the system can be configured to utilize network hints. Slice-based encoding and dynamic resolution changes can be applied based on hints about network health.

FIGS. 23A-23B illustrate methods 2300, 2310 for configuring local or remote execution of a cloud-based game. FIG. 23A illustrates a method 2300 for remote execution of a cloud-based game for a client device. FIG. 23B illustrates a method 2310 for configuring local execution of a cloud-based game on a client device.

As shown in FIG. 23A, the method 2300 includes an operation for receiving a selection of a game to be played through the cloud gaming system (block 2301). The selection can be received at a browser application or streaming client application and transmitted to the cloud gaming system via the network. At the client or at the server device, the cloud gaming system may determine a set of locally available clients (block 2302). The set of locally available clients may be local clients registered with a user profile, such as a set of devices that have previously been used to connect to the cloud gaming system. The set of client devices may also include clients that are discoverable and accessible over a local network by the computing device from which the user is executing the game streaming client. The method 2300 additionally includes operations for determining whether a locally available client is capable of local execution. A client may be considered capable of local execution if it has a graphics processor with sufficient processing power for the selected game and sufficient available storage for the game. If it is determined that a locally available client is capable of local execution (block 2303), the method 2300 may proceed to transition to local execution ("Yes", 2304), which is described in detail by method 2310 of FIG. 23B. Otherwise, the client may configure the game for remote execution (block 2305). The client can then initiate remote execution of the game (block 2306).
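The launch decision of blocks 2301-2306 can be sketched as a simple capability test over the set of locally available clients. The capability fields and thresholds below are illustrative assumptions, not values prescribed by the system:

def choose_execution(clients: list, required_gflops: float,
                     required_storage_gb: float) -> tuple:
    for client in clients:  # block 2302: enumerate local candidates
        if (client["gflops"] >= required_gflops
                and client["free_storage_gb"] >= required_storage_gb):
            return ("local", client["name"])  # block 2303 -> transition (2304)
    return ("remote", "cloud")                # blocks 2305-2306

clients = [
    {"name": "thin-tv-client", "gflops": 50, "free_storage_gb": 4},
    {"name": "gaming-desktop", "gflops": 9000, "free_storage_gb": 250},
]
print(choose_execution(clients, required_gflops=5000, required_storage_gb=80))
# ('local', 'gaming-desktop')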
When remote execution is configured, the cloud gaming client can perform operations to enable the server to map selected client resources at the server through the encapsulation layer, so that the server can access any client-based resources it will need. Mapping selected client resources to the server enables, for example, audio input received at the client device and keyboard/mouse/controller input provided at the client device to be relayed to the server. In one embodiment, a network port associated with the client device may be mapped to a network port on the remote server. Network output generated by the game (such as telemetry data for a racing game) can be relayed to the cloud gaming client for consumption by software-based accessories, which can be configured to display data or perform actions based on the telemetry. For games that use shared memory to output telemetry data, a memory buffer can be associated with the cloud gaming client, and the data in that buffer can be synchronized with a buffer on the remote server. The cloud gaming client can also perform operations to cause the server to cache selected client resources on the server, for example, where client-based files are to be used by the server-based game application. For example, game configuration data stored on the client (such as key mappings or input device configuration) can be cached on the server and used to configure the execution of the game. The cloud gaming client can then perform operations to start the game on the server and stream the output to the client application of the cloud gaming system.

During remote execution, the client may provide feedback to the server to enable adjustments to the streaming attributes used by the server (block 2307). The feedback can be WebRTC network feedback, as detailed in FIG. 22; if the client is connected to the network via a wireless connection, the feedback can include feedback from the Wi-Fi driver. The feedback can include metrics including, but not limited to, round-trip latency and packet loss. As network latency increases, the server can take steps to reduce the amount of time required to render a frame. The adjustment of streaming attributes may include adjusting the processor frequency of the graphics processor assigned to execute the game application. The adjustment of streaming attributes may also include dynamic adjustment of the rendering settings of the game. The encoding properties of the video encoding used for the game output can also be adjusted.
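The feedback-driven adjustment of block 2307 can be sketched as a mapping from network metrics to the streaming attributes named above (GPU frequency, rendering settings, encoding properties). The thresholds and values are illustrative assumptions only:

def adjust_streaming(feedback: dict, attrs: dict) -> dict:
    if feedback["round_trip_ms"] > 50:
        # Reduce frame time: raise GPU clocks and lower rendering settings.
        attrs["gpu_frequency_mhz"] = min(attrs["gpu_frequency_mhz"] + 100, 2000)
        attrs["render_quality"] = "medium"
    if feedback["packet_loss"] > 0.02:
        # Sustained loss: halve the encoder bitrate, subject to a floor.
        attrs["encoder_bitrate_kbps"] = max(attrs["encoder_bitrate_kbps"] // 2,
                                            2000)
    return attrs

attrs = {"gpu_frequency_mhz": 1600, "render_quality": "high",
         "encoder_bitrate_kbps": 8000}
print(adjust_streaming({"round_trip_ms": 72, "packet_loss": 0.03}, attrs))
# {'gpu_frequency_mhz': 1700, 'render_quality': 'medium',
#  'encoder_bitrate_kbps': 4000}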
As shown in FIG. 23B, when local execution is enabled, the cloud gaming client can execute method 2310. The method 2310 includes mapping the determined resources of the local client to the encapsulation layer of the game application for the client (block 2313). The mapping of client resources to the encapsulation layer includes operations for mapping server resources to the client so that the client can access server-based resources, such as a file system container including the game server stack. For example, a hybrid file system 1900 as in FIG. 19 may be configured. The cloud gaming client may then perform operations to download game data at the determined local client (block 2314). This operation may include caching selected server resources on the client.

While the game data is being downloaded, the encapsulation layers for different instances of the game can be configured to enable remote execution of the game. Remote game play can then begin immediately by streaming a remotely executed instance of the game until the game is ready for local execution (block 2315). Initially streaming the game enables a quick start via remote execution while the game assets are synchronized to the client device. During this initial remote execution phase, feedback (e.g., WebRTC, etc.) may be provided to the server. Once the game is ready for local execution, game play can transition to local execution while retaining the saved game state generated during remote execution (block 2316). In one embodiment, after local execution is ready, the transition can occur the next time the game is launched. In one embodiment, some games may be configured to transition from a remotely executed instance to a locally executed instance at runtime. Other games require a restart, or an exit and relaunch of the game. Once the transition to local execution is performed, the output of the locally executed game can be streamed to the streaming client (block 2317), which can be on the same computing device that executes the game or on a different streaming device on the same local network.
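The download-then-transition behavior of blocks 2314-2317 can be sketched as a small state machine: the remotely executed instance is streamed while the download proceeds, and the save state generated remotely is carried into local execution. All names below are hypothetical:

def play_with_transition(download_steps: int):
    save_state = None
    for step in range(download_steps):      # block 2314: data downloading
        save_state = f"remote-save@{step}"  # block 2315: stream remote play
        yield ("remote-stream", save_state)
    # Block 2316: download complete; switch to local execution, retaining
    # the save state generated during the remote phase.
    yield ("local-stream", save_state)      # block 2317

for phase, save in play_with_transition(download_steps=2):
    print(phase, save)
# remote-stream remote-save@0
# remote-stream remote-save@1
# local-stream remote-save@1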
Additional exemplary computing device

FIG. 24 is a block diagram of a computing device 2400 including a graphics processor 2404 according to an embodiment. Versions of the computing device 2400 may be, or may be included within, a communication device such as a set-top box (e.g., an Internet-based cable TV set-top box, etc.), a global positioning system (GPS)-based device, and the like. The computing device 2400 may also be, or may be included within, a mobile computing device such as a cellular phone, smartphone, personal digital assistant (PDA), tablet computer, laptop computer, e-reader, smart TV, TV platform, wearable device (e.g., glasses, watches, necklaces, smart cards, jewelry, clothing, etc.), media player, and so forth. For example, in one embodiment, the computing device 2400 includes a mobile computing device that employs an integrated circuit ("IC"), such as a system on a chip ("SoC" or "SOC"), that integrates various hardware and/or software components of the computing device 2400 on a single chip. The computing device 2400 may be a computing device such as the data processing system 100 of FIG. 1 and may be used as a client and/or server element of the cloud gaming system described herein.

The computing device 2400 includes a graphics processor 2404, which represents any graphics processor described herein. In one embodiment, the graphics processor 2404 includes a cache 2414, which may be a single cache or may be divided into multiple segments of cache memory, including, but not limited to, any number of L1, L2, L3, or L4 caches, render caches, depth caches, sampler caches, and/or shader unit caches. In one embodiment, the graphics processor 2404 includes control and scheduling logic 2415, which may be firmware executed by a microcontroller within the graphics processor 2404. The graphics processor 2404 also includes a GPGPU engine 2444, which includes one or more graphics engines, graphics processor cores, and other graphics execution resources as described herein. Such graphics execution resources can be presented in forms including, but not limited to, execution units, shader engines, fragment processors, vertex processors, streaming multiprocessors, graphics processor clusters, or any collection of computing resources suitable for processing graphics or image resources or for performing general-purpose computing operations in a heterogeneous processor. The processing resources of the GPGPU engine 2444 may be included in multiple chiplets of hardware logic connected to a substrate, as shown in FIGS. 11B-11D. The GPGPU engine 2444 may include a GPU slice 2445 that includes graphics processing and execution resources, caches, samplers, and the like. The GPGPU engine 2444 may further include one or more special slices 2446 that include, for example, non-volatile memory 2416, network processing resources 2417, and/or general-purpose compute resources 2418.

As illustrated, in one embodiment, in addition to the graphics processor 2404, the computing device 2400 may further include any number and type of hardware components and/or software components, including, but not limited to, an application processor 2406, memory 2408, and input/output (I/O) sources 2410. The application processor 2406 may interact with a hardware graphics pipeline, as illustrated with reference to FIG. 3A, to share graphics pipeline functionality. Processed data is stored in a buffer in the hardware graphics pipeline, and state information is stored in the memory 2408. The resulting data can be passed to a display controller for output via a display device, such as the display device 318 of FIG. 3A. The display device may be of various types, such as a cathode ray tube (CRT), thin film transistor (TFT), liquid crystal display (LCD), or organic light emitting diode (OLED) array, and may be configured to display information to a user via a graphical user interface.

The application processor 2406 may include one or more processors, such as the processor(s) 102 of FIG. 1, and may be the central processing unit (CPU) used, at least in part, to execute an operating system (OS) 2402 of the computing device 2400. The OS 2402 may serve as an interface between the hardware and/or physical resources of the computing device 2400 and one or more users. The OS 2402 may include driver logic for the various hardware devices in the computing device 2400. The driver logic may include graphics driver logic 2422, which may include the user-mode graphics driver 1026 and/or the kernel-mode graphics driver 1029 of FIG. 10. The OS 2402 may also include a cloud gaming manager 2432, which may be an application, library, and/or framework that enables hybrid execution of cloud-based game applications.

It is contemplated that, in some embodiments, the graphics processor 2404 may exist as part of the application processor 2406 (such as part of a physical CPU package), in which case at least part of the memory 2408 may be shared by the application processor 2406 and the graphics processor 2404, although at least part of the memory 2408 may be exclusive to the graphics processor 2404, or the graphics processor 2404 may have a separate store of memory. The memory 2408 may include a pre-allocated region of a buffer (e.g., a frame buffer); however, those of ordinary skill in the art will appreciate that the embodiments are not so limited, and any memory accessible by the lower graphics pipeline may be used. The memory 2408 may include various forms of random access memory (RAM) (e.g., SDRAM, SRAM, etc.) comprising an application that utilizes the graphics processor 2404 to render a desktop or 3D graphics scene. A memory controller hub, such as the memory controller 116 of FIG. 1, can access data in the memory 2408 and forward it to the graphics processor 2404 for graphics pipeline processing. The memory 2408 may be made available to other components within the computing device 2400.
For example, any data (e.g., input graphics data) received from the various I/O sources 2410 of the computing device 2400 can be temporarily queued in the memory 2408 before being operated on by the one or more processors (e.g., the application processor 2406) in the implementation of a software program or application. Similarly, data that a software program determines should be sent from the computing device 2400 to an external entity through one of the computing system interfaces, or stored in an internal storage element, is often temporarily queued in the memory 2408 before it is transmitted or stored.

The I/O sources can include devices such as touch screens, touch panels, touch pads, virtual or conventional keyboards, virtual or conventional mice, ports, connectors, network devices, and the like, and can attach via the platform controller hub 130 as referenced in FIG. 1. In addition, the I/O sources 2410 may include one or more I/O devices (e.g., networking adapters) for transferring data to and/or from the computing device 2400, or large-scale non-volatile storage (e.g., hard disk drives) within the computing device 2400. User input devices, including alphanumeric and other keys, can be used to communicate information and command selections to the graphics processor 2404. Another type of user input device is a cursor control, such as a mouse, trackball, touch screen, touch pad, or cursor direction keys, used to communicate direction information and command selections to the GPU and to control cursor movement on the display device. The camera and microphone array of the computing device 2400 can be used to observe gestures, record audio and video, and receive and transmit visual and audio commands.

The I/O sources 2410 configured as network interfaces can provide access to networks such as a LAN, a wide area network (WAN), a metropolitan area network (MAN), a personal area network (PAN), Bluetooth, a cloud network, a cellular or mobile network (e.g., third generation (3G), fourth generation (4G), etc.), an intranet, the Internet, and so forth. The network interface(s) may include, for example, a wireless network interface having one or more antennas. The network interface(s) may also include, for example, a wired network interface to communicate with remote devices via a network cable, which may be, for example, an Ethernet cable, a coaxial cable, a fiber optic cable, a serial cable, or a parallel cable.

The network interface(s) may provide access to a LAN, for example, by conforming to the IEEE 802.11 standard, and/or the wireless network interface may provide access to a personal area network, for example, by conforming to the Bluetooth standard. Other wireless network interfaces and/or protocols, including previous and subsequent versions of those standards, may also be supported. In addition to, or instead of, communication via the wireless LAN standards, the network interface(s) may provide wireless communication using, for example, a Time Division Multiple Access (TDMA) protocol, a Global System for Mobile Communications (GSM) protocol, a Code Division Multiple Access (CDMA) protocol, and/or any other type of wireless communication protocol.

It should be appreciated that, for certain implementations, a system equipped with fewer or more components than the examples described above may be preferred.
The configuration of the computing device 2400 may therefore vary from implementation to implementation depending on numerous factors, such as price constraints, performance requirements, technological improvements, or other circumstances. Examples include, but are not limited to, a mobile device, personal digital assistant, mobile computing device, smartphone, cellular phone, handset, one-way pager, two-way pager, messaging device, computer, personal computer (PC), desktop computer, laptop computer, notebook computer, handheld computer, tablet computer, server, server array or server farm, web server, network server, Internet server, workstation, minicomputer, mainframe computer, supercomputer, network appliance, web appliance, distributed computing system, multiprocessor system, processor-based system, consumer electronics, programmable consumer electronics, television, digital television, set-top box, wireless access point, base station, subscriber station, mobile subscriber center, radio network controller, router, hub, gateway, bridge, switch, machine, or combinations thereof.

Embodiments may be implemented as any one or a combination of: one or more microchips or integrated circuits interconnected using a motherboard, hard-wired logic, software stored by a memory device and executed by a microprocessor, firmware, an application-specific integrated circuit (ASIC), and/or a field-programmable gate array (FPGA). By way of example, the term "logic" may include software or hardware and/or combinations of software and hardware.

Embodiments may be provided, for example, as a computer program product that may include one or more machine-readable media having machine-executable instructions stored thereon that, when executed by one or more machines (such as a computer, a network of computers, or other electronic devices), cause the one or more machines to perform operations in accordance with the embodiments described herein. A machine-readable medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROM, RAM, EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), magnetic or optical cards, flash memory, or other types of non-transitory machine-readable media suitable for storing machine-executable instructions.

Moreover, embodiments may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of one or more data signals embodied in and/or modulated by a carrier wave or other propagation medium via a communication link (e.g., a modem and/or network connection).

Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present invention. The appearances of the phrase "in one embodiment" in various places in this specification are not necessarily all referring to the same embodiment.
The processes depicted in the accompanying figures can be implemented by processing logic comprising hardware (e.g., circuitry, dedicated logic, etc.), software (e.g., instructions on a non-transitory machine-readable storage medium), or a combination of both hardware and software. Reference will be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.

It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present invention. The first contact and the second contact are both contacts, but they are not the same contact.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of all embodiments. As used in the description of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprise" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

As used herein, the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrases "if it is determined" or "if [a stated condition or event] is detected" may be construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.

Described herein is a cloud-based gaming system in which the graphics processing operations of a cloud-based game can be executed on a client device. Client-based graphics processing may be enabled in response to determining that the client includes a graphics processor with performance exceeding a minimum threshold.
When the game is executed remotely and streamed to the client, the client can be configured to provide network feedback, which can be used to adjust the execution of and/or the encoding for the game.

One embodiment provides a non-transitory machine-readable medium storing instructions that cause one or more processors of an electronic device to perform operations comprising: determining one or more capabilities of a graphics processor of the electronic device; in response to determining that the one or more capabilities of the graphics processor exceed a threshold, enabling local execution of at least a portion of the graphics processing operations of a game application associated with a cloud-based game service; retrieving at least a portion of a cloud-based game hosted by the cloud-based game service; and performing one or more graphics processing operations of the cloud-based game via the graphics processor of the electronic device. In one embodiment, the operations further include: while at least a portion of the cloud-based game hosted by the cloud-based game service is being retrieved, receiving an output stream of a remotely executed instance of the cloud-based game, the output stream being adjusted based on real-time communication metrics of the network.

One embodiment provides a system that includes the non-transitory machine-readable medium described above.
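As a concrete illustration of the capability check and execution routing described above, the following minimal Python sketch shows one way a client might make the local-versus-remote decision. All names, thresholds, and capability fields are hypothetical; the embodiments do not prescribe a particular API or threshold values.

    from dataclasses import dataclass

    # Hypothetical minimum capabilities; the embodiments leave the
    # concrete threshold values to the implementation.
    MIN_GPU_MEMORY_GB = 4.0
    MIN_MEMORY_BANDWIDTH_GBPS = 100.0
    MIN_FILL_RATE_GPIX_PER_S = 30.0

    @dataclass
    class GpuCapabilities:
        memory_gb: float
        memory_bandwidth_gbps: float
        fill_rate_gpix_per_s: float

    def gpu_exceeds_threshold(gpu: GpuCapabilities) -> bool:
        """Return True when every measured capability clears its minimum."""
        return (gpu.memory_gb >= MIN_GPU_MEMORY_GB
                and gpu.memory_bandwidth_gbps >= MIN_MEMORY_BANDWIDTH_GBPS
                and gpu.fill_rate_gpix_per_s >= MIN_FILL_RATE_GPIX_PER_S)

    def choose_execution_mode(gpu: GpuCapabilities) -> str:
        # Local graphics processing is enabled only when the client GPU
        # exceeds the threshold; otherwise the game remains remote and
        # its rendered output is streamed to the client.
        return "local" if gpu_exceeds_threshold(gpu) else "remote-streamed"

    if __name__ == "__main__":
        laptop = GpuCapabilities(memory_gb=8.0,
                                 memory_bandwidth_gbps=256.0,
                                 fill_rate_gpix_per_s=90.0)
        print(choose_execution_mode(laptop))  # prints: local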
An embodiment provides a method including: mapping an application via an encapsulation layer for execution by a processing resource selected from a set of processing resources, the set including processing resources of a server device of a cloud gaming system and processing resources of a client device of the cloud gaming system; executing the application, via the encapsulation layer, on the processing resource to which it is mapped; and streaming the output of the execution of the application to a client application of the cloud gaming system. The method may further include importing the application into storage associated with the cloud gaming system and encapsulating the application in the encapsulation layer. The encapsulation layer may be configured to enable selective execution of applications by the server device of the cloud gaming system and by the client device of the cloud gaming system. In one embodiment, the encapsulated application includes core logic and multiple wrappers associated with the encapsulation layer. The encapsulation layer is configured to selectively relay API commands made by the core logic. The wrappers associated with the encapsulation layer include: a file system wrapper, an input device wrapper, a graphics programming interface wrapper, an audio device wrapper, and a system interface wrapper. Other types of wrappers can also be provided.

In one embodiment, mapping the application via the encapsulation layer includes mapping the encapsulation layer to a resource selected from a resource set, the resource set including resources of a host device or resources of a remote device.

In a further embodiment, the method includes: mapping the encapsulation layer of a first instance of the application for execution by a client of the cloud gaming system; initiating the transfer of data associated with the first instance of the application to the client of the cloud gaming system; mapping the encapsulation layer of a second instance of the application for execution by a server of the cloud gaming system; and initiating execution of the second instance of the application on the server of the cloud gaming system. Streaming the output of the execution of the application to the client application of the cloud gaming system may include streaming the output of the second instance of the application during the transfer of the data associated with the first instance of the application. During the execution of the second instance of the application, the client device may provide network feedback to the server of the cloud gaming system. In one embodiment, after the transfer of the data associated with the first instance of the application is complete, execution of the first instance of the application is initiated on the client of the cloud gaming system, and the output of the first instance of the application is streamed to the client application of the cloud gaming system. The first instance of the application may be executed on a first client of the cloud gaming system, and the client application of the cloud gaming system may be executed on a second client of the cloud gaming system.
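The two-instance handoff described above can be pictured with a simplified, single-threaded Python sketch. This is only a sequencing illustration under the assumption of a fixed number of transfer chunks; in a real system the data transfer, streaming, and feedback would proceed concurrently over a network, and all names here are hypothetical.

    def migrate_to_client(total_chunks: int) -> list[str]:
        """Stream the server-side (second) instance while data for the
        client-side (first) instance is transferred, then hand off."""
        events = []
        for chunk in range(1, total_chunks + 1):
            # While the first instance's data is in flight, the remotely
            # executed second instance keeps producing streamed output.
            events.append(f"transfer chunk {chunk}/{total_chunks}; "
                          "stream frame from remote (second) instance")
            # The client reports network feedback that the server can use
            # to adapt execution and/or encoding of the stream.
            events.append("client sends network feedback to server")
        # Transfer complete: start the first instance on the client and
        # stream its output to the client application instead.
        events.append("start local (first) instance; switch stream source")
        return events

    print("\n".join(migrate_to_client(3)))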
An embodiment provides a method for execution on an electronic device, the method including: determining one or more capabilities of a graphics processor of the electronic device; in response to determining that the one or more capabilities of the graphics processor exceed a threshold, enabling local execution of at least a portion of the graphics processing operations of a game application associated with a cloud-based game service; retrieving at least a portion of a cloud-based game hosted by the cloud-based game service; and performing one or more graphics processing operations of the cloud-based game via the graphics processor of the electronic device. In a further embodiment, determining the one or more capabilities of the electronic device includes determining the capabilities of the graphics processor of the client device and of a network associated with the client device. Determining the capabilities of the graphics processor of the client device may include determining the amount of memory associated with the graphics processor of the client device or determining the bandwidth associated with that memory. Determining the one or more capabilities of the graphics processor may also include determining the fill rate of the graphics processor. Determining the capability of the network associated with the client device may include determining the network latency between the electronic device and a server of the cloud gaming system. Retrieving at least a portion of the cloud-based game hosted by the cloud-based game service may include mapping resources of the server of the cloud gaming system to the electronic device and caching selected resources of the server on the electronic device.

The selected resources of the server of the cloud gaming system may include executable logic associated with the cloud-based game and one or more assets associated with the cloud-based game. The method may additionally include: while at least a portion of the cloud-based game hosted by the cloud-based game service is being retrieved, receiving an output stream of a remotely executed instance of the cloud-based game. After the selected resources of the server are cached on the electronic device, execution of the cloud-based game can transition from the remotely executed instance to a locally executed instance. In one embodiment, the transition from remote execution to local execution can be performed without exiting or restarting the cloud-based game.

One embodiment provides a method comprising: determining to perform remote execution of a cloud-based game; determining the latency sensitivity of the cloud-based game; and, based on the latency sensitivity, selecting a remote execution resource from a set of remote execution resources to execute the game, the set of remote execution resources including cloud-based servers and edge servers, where a cloud-based server is selected for a cloud-based game that is not latency sensitive and an edge server is selected for a latency-sensitive game. The method additionally includes receiving a stream of the output of the remote execution of the cloud-based game from the remote execution resource.

In various embodiments, determining the latency sensitivity of the cloud-based game includes determining a latency sensitivity category assigned to the cloud-based game. In addition, determining to perform remote execution of the cloud-based game may include: determining to perform at least a portion of the graphics processing operations for the cloud-based game on a client computing device; mapping a container containing resources of the cloud-based game to the file system of the client computing device; and transferring the resources in the container to the client computing device. The method may further include: while the resources in the container are being transferred, performing at least a portion of the graphics processing operations for the cloud-based game at the cloud-based server or the edge server, and streaming the output of the graphics processing operations performed at the cloud-based server or the edge server to the client device. Performing at least a portion of the graphics processing operations for the cloud-based game at the cloud-based server or edge server may include mapping resources on the client computing device to the cloud-based server or edge server. The mapped resources on the client device may include data specific to the cloud-based game.
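The latency-based placement decision described above reduces to a small policy function. The following hedged Python sketch assumes a simple two-category scheme; the embodiments speak more generally of latency sensitivity categories assigned to games.

    def select_remote_resource(latency_sensitive: bool) -> str:
        """Pick a remote execution resource for a cloud-based game.

        Edge servers sit closer to the player and therefore host the
        latency-sensitive titles; other games can run on a cloud-based
        server farther from the client.
        """
        return "edge-server" if latency_sensitive else "cloud-server"

    assert select_remote_resource(True) == "edge-server"
    assert select_remote_resource(False) == "cloud-server"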
One embodiment provides a non-transitory machine-readable medium storing instructions that cause one or more processors of an electronic device to perform the methods described herein. One embodiment provides a system that includes: one or more processors including a graphics processor; and a memory device storing instructions for performing the methods described herein.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, although the embodiments have been described in conjunction with specific examples thereof, the true scope of the embodiments should not be limited thereto, because other modifications will become apparent to those skilled in the art upon a study of the drawings, the specification, and the appended claims. |
Self-aligning fabrication methods are disclosed for forming memory access devices comprising a doped chalcogenide material. The methods may be used for forming three-dimensionally stacked cross point memory arrays. The method includes forming an insulating material over a first conductive electrode, patterning the insulating material to form vias that expose portions of the first conductive electrode, forming a memory access device within the vias of the insulating material, and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device. The memory access device is formed of a doped chalcogenide material and is formed using a self-aligned fabrication method. |
CLAIMS What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device, wherein the memory access device is formed of a doped chalcogenide material, and the memory access device is formed using a self-aligned fabrication method. 2. The method of claim 1, wherein the doped chalcogenide material comprises one of the group consisting of a Cu-doped combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 3. The method of claim 1, wherein the doped chalcogenide material comprises one of the group consisting of a Ag-doped combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 4. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device further comprises depositing the doped chalcogenide material using electrochemical deposition. 5. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device further comprises depositing the doped chalcogenide material using vapor phase deposition. 6. The method of claim 4, wherein during the electrochemical deposition process, the doped chalcogenide material is only formed on the exposed portions of the first conductive electrode. 7. The method of claim 1, wherein vias formed in the insulating material have a width of 40nm or less. 8. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device occurs at temperatures at or below 400 °C. 9. The method of claim 4, wherein forming a memory access device further comprises planarizing the electrochemically deposited doped chalcogenide material to a top surface of the insulating material. 10. The method of claim 1, wherein the first conductive electrode is a word line. 11. The method of claim 1, further comprising forming a second conductive electrode over the memory element. 12. The method of claim 11, wherein the second conductive electrode is a bit line. 13. The method of claim 11, wherein the memory device is a cross point memory. 14. The method of claim 13, further comprising forming a plurality of repeated levels of individual memory devices, each repeated level comprising the first conductive electrode, the insulating material, the memory access device, the memory element and the second conductive electrode, wherein the cross point memory device comprises multiple levels of memory elements and memory access devices such that it is a three-dimensionally stacked memory device, and wherein each memory access device is a select device for a corresponding memory element. 15. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material using a self-aligned fabrication method; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device.
16. The method of claim 15, wherein the self-aligned fabrication method further comprises: depositing a chalcogenide material; depositing a dopant material on the chalcogenide material; and causing the chalcogenide material to become doped with the dopant material. 17. The method of claim 16, wherein the chalcogenide material is deposited using vapor phase deposition. 18. The method of claim 16, wherein the chalcogenide material is deposited using electrochemical deposition. 19. The method of claim 16, wherein the dopant material is selectively deposited on the chalcogenide material using one of electrochemical deposition or physical vapor deposition of the dopant material. 20. The method of claim 16, wherein the chalcogenide material is a combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 21. The method of claim 16, wherein the dopant material is one of Cu or Ag. 22. The method of claim 18, wherein during the electrochemical deposition process, the chalcogenide material is only formed on the exposed portions of the conductive electrode. 23. The method of claim 16, further comprising planarizing the dopant material and portions of the doped chalcogenide material extending above the vias in the insulating material. 24. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material using a self-aligned fabrication method; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device, wherein the self-aligned fabrication method further comprises: depositing a chalcogenide material; infusing the chalcogenide material with Ge; depositing a dopant material on the Ge-infused chalcogenide material; and causing the Ge-infused chalcogenide material to become doped with the dopant material. 25. The method of claim 24, wherein the chalcogenide material is deposited using vapor phase deposition. 26. The method of claim 24, wherein the chalcogenide material is deposited using electrochemical deposition. 27. The method of claim 24, wherein the chalcogenide material is infused with Ge using gas-cluster ion beam modification. 28. The method of claim 24, wherein the dopant material is selectively deposited on the Ge-infused chalcogenide material using one of electrochemical deposition or physical vapor deposition of the dopant material. 29. The method of claim 24, wherein the chalcogenide material is a combination of Se and/or Te alloyed with one or more of Sb and In. 30. The method of claim 24, wherein the dopant material is one of Cu or Ag. 31. The method of claim 24, wherein during the electrochemical deposition process, the chalcogenide material is only formed on the exposed portions of the conductive electrode. 32. The method of claim 24, further comprising planarizing the dopant material and portions of the doped chalcogenide material extending above the vias in the insulating material. |
WO 2011/084482 PCT/US2010/060508 METHODS OF SELF-ALIGNED GROWTH OF CHALCOGENIDE MEMORY ACCESS DEVICE FIELD OF THE INVENTION [0001] Disclosed embodiments relate generally to memory devices and more particularly to methods of forming self-aligned, chalcogenide memory access devices for use in memory devices. BACKGROUND [0001] A non-volatile memory device is capable of retaining stored information even when power to the memory device is turned off. Traditionally, non-volatile memory devices occupied large amounts of space and consumed large quantities of power. As a result, non-volatile memory devices have been mainly used in systems where limited power drain is tolerable and battery life is not an issue. [0002] One type of non-volatile memory device includes resistive memory cells as the memory elements therein. Resistive memory elements are those where resistance states can be programmably changed to represent two or more digital values (e.g., 1, 0). Resistive memory elements store data when a physical property of the memory elements is structurally or chemically changed in response to applied programming voltages, which in turn changes cell resistance. Examples of variable resistance memory devices include memory devices that include memory elements formed using, for example, variable resistance polymers, perovskite materials, doped amorphous silicon, phase-changing glasses, and doped chalcogenide glass, among others. Memory access devices, such as diodes, are used to access the data stored in these memory elements. FIG. 1 illustrates a general structure of a cross point type memory device. Memory cells are positioned between access lines 21, 22, for example word lines, and data/sense lines 11, 12, for example bit lines. Each memory cell typically includes a memory access device 31 electrically coupled to a memory element 41. [0003] As in any type of memory, it is a goal in the industry to have as dense a memory array as possible; therefore, it is desirable to increase the number of memory cells in an array for a given chip area. In pursuing this goal, some memory arrays have been designed in multiple planes in three dimensions, stacking planes of memory cells above one another. However, formation of these three-dimensional structures can be very complicated and time consuming. One of the limiting factors in forming such three-dimensional memory structures is the formation of the memory access devices. Traditional methods may require several expensive, additional processing steps and may also cause damage to previously formed materials during formation of subsequent materials. [0004] Therefore, improved fabrication methods for forming memory access devices are desired. BRIEF DESCRIPTION OF THE DRAWINGS [0002] FIG. 1 illustrates a general structure of a cross point type memory device. [0003] FIG. 2A illustrates a cross-sectional view of a cross point memory device including a memory access device according to disclosed embodiments. [0004] FIG. 2B illustrates a top view of the cross point memory device of FIG. 2A. [0005] FIG. 3A illustrates an alternative configuration of a cross-sectional view of a cross point memory device including a memory access device according to disclosed embodiments. [0006] FIG. 3B illustrates a top view of the cross point memory device of FIG. 3A. [0007] FIGS. 4A-4D each illustrates a cross-sectional view of an intermediate step in the fabrication of a memory device in accordance with disclosed embodiments. [0008] FIGS.
5A and 5B are scanning electron microscope photographs showing example memory access devices formed by a disclosed embodiment. [0009] FIGS. 6A and 6B each illustrates a cross-sectional view of an intermediate step in the fabrication of a memory device in accordance with disclosed embodiments. [0010] FIGS. 7A and 7B each illustrates a cross-sectional view of an intermediate step in the fabrication of a memory device in accordance with disclosed embodiments. [0011] FIG. 8 illustrates a processor system that includes a memory device having memory access devices according to a disclosed embodiment. DETAILED DESCRIPTION [0012] In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments that may be practiced. It should be understood that like reference numbers represent like elements throughout the drawings. These example embodiments are described in sufficient detail to enable those skilled in the art to practice them. It is to be understood that other embodiments may be utilized, and that structural, material, and electrical changes may be made, without departing from the scope of the invention, only some of which are discussed in detail below. [0013] According to disclosed embodiments, memory access devices for accessing memory elements of a memory cell are formed using self-aligning fabrication methods. Self-aligning fabrication techniques require fewer processing steps, and are thus more cost-effective, than many traditional methods, for example by reducing the number of masking steps required for fabrication. Self-aligned fabrication methods may also minimize the required contact area of the memory access device because they may provide superior fill capabilities. [0014] Moreover, the self-aligning methods of the disclosed embodiments allow easy three-dimensional stacking of multiple levels of memory arrays. One way in which this is possible is because the self-aligning fabrication methods are implemented at low temperatures (e.g., at or below 400 °C). Low temperature formation facilitates three-dimensional stacking of multiple memory levels because it limits damage to previously formed levels. [0015] Additionally, according to the disclosed embodiments, the memory access devices are formed of Cu- or Ag-doped chalcogenide materials. Chalcogenide materials (doped with, e.g., nitride) are known in the art for use as a phase-change material for forming memory elements. However, it is also known that Cu- or Ag-doped chalcogenides, which act as electrolytes rather than as a phase-change material, are particularly suitable for use as memory access devices. In a Cu- or Ag-doped chalcogenide material, the metal "dopant" ions are mobile within the chalcogenide material. These "mobile" ions are what allow current to flow through the chalcogenide material when it is utilized as a memory access device. [0016] The use of Cu- or Ag-doped chalcogenide materials also provides the desired benefits of high current density, e.g., greater than 10⁶ A/cm², and low threshold ON voltage (i.e., the minimum voltage required to "turn on" or actuate the device), e.g., less than 1 V. The behavior can be made to resemble that of a diode-like select device. These aspects of a memory access device are important for appropriate operation of a high-density memory device.
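To put these figures in perspective, consider an illustrative back-of-the-envelope calculation (not taken from the source): at the stated current density, a single access device filling a via at the sub-40nm dimension discussed below could deliver on the order of

$$ I = J \cdot A \geq 10^{6}\,\mathrm{A/cm^{2}} \times (40\,\mathrm{nm})^{2} = 10^{6}\,\mathrm{A/cm^{2}} \times 1.6 \times 10^{-11}\,\mathrm{cm^{2}} = 16\,\mu\mathrm{A} $$

of drive current per cell, which indicates the scale of programming current such a select device can pass to its memory element.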
[0017] The memory access device 20 of the disclosed embodiments may be formed of any Cu- or Ag-doped chalcogenide material, including, for example, a Cu- or Ag-doped combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, O and Ge. Specific examples of appropriate chalcogenide materials (e.g., chalcogenide alloys) (which are then doped with one of copper or silver) for use in the memory access devices of the disclosed embodiments include alloys of In-Se, Sb-Te, As-Te, Al-Te, Ge-Te, Ge-S, Te-Ge-As, In-Sb-Te, Te-Sn-Se, Ge-Se-Ga, Bi-Se-Sb, Ga-Se-Te, Sn-Sb-Te, Te-Ge-Sb-S, Te-Ge-Sn-O, Sb-Te-Bi-Se, Ge-Sb-Se-Te, and Ge-Sn-Sb-Te. [0018] FIGS. 2A and 2B illustrate an example of a cross point memory device 100 including memory access devices 20 formed in accordance with the disclosed embodiments. FIG. 2A illustrates a cross-sectional view of the cross point memory device 100 and FIG. 2B illustrates a top-down view of the cross point memory device 100. A memory access device 20, an electrode 150 and a discrete memory element 140 are stacked at the intersection of the access lines 110, for example word lines, and the data/sense lines 120, for example bit lines, of the cross point memory device 100. Each discrete memory element 140 is accessed via the corresponding memory access device 20. Access lines 110 and data/sense lines 120 are formed of a conductive material, such as, for example, aluminum, tungsten, tantalum or platinum, or alloys of the same. Suitable materials for electrode 150 include, for example, TiN, TaN, Ta, TiAlN and TaSiN. Memory element 140 may be formed of an appropriate variable resistance material including, for example, variable resistance polymers, perovskite materials, doped amorphous silicon, phase-changing glasses, and doped chalcogenide glass, among others. An insulating material 130, such as an oxide, fills the other areas of the memory device. [0019] FIGS. 3A and 3B illustrate cross-sectional and top-down views, respectively, of an alternative arrangement of a cross point memory device 200. In FIGS. 3A and 3B, like elements are indicated by the same reference numerals from FIGS. 2A and 2B and are not described in detail. As can be seen in FIG. 3A, memory element 240 is formed as a continuous layer instead of being formed as discrete elements, as in memory element 140 (FIG. 2A). This configuration further reduces the complexity of manufacturing as well as alignment problems between the memory element 140 and corresponding electrodes 150/memory access devices 20. [0020] Except for the formation of the memory access device 20, which is formed in accordance with the disclosed embodiments, the other elements of the cross point memory devices 100/200 (e.g., word lines, bit lines, electrodes, etc.) are formed using methods known in the art. An example method is now described; however, any known fabrication methods may be used for the other elements of cross point memory devices 100/200. Access line 110 may be formed over any suitable substrate. The conductive material forming access lines 110 may be deposited with any suitable methodology, including, for example, atomic layer deposition (ALD) methods or physical vapor deposition (PVD) methods, such as sputtering and evaporation, thermal deposition, chemical vapor deposition (CVD) methods, plasma-enhanced CVD (PECVD) methods, and photo-organic deposition (PODM).
Then the material may be patterned to form access lines 110 using photolithographic processing and one or more etches, or by any other suitable patterning technique. Insulating material 130 is next formed over access lines 110. The insulating material 130 may be deposited and patterned by any of the methods discussed with respect to the access lines 110 or other suitable techniques to form vias at locations corresponding to locations where access lines 110 and data/sense lines 120 will cross. Memory access devices 20 are then formed in the vias in accordance with the disclosed embodiments. [0021] In the fabrication of memory device 100 (FIGS. 2A/2B), after formation of memory access devices 20, an additional insulating material 130 may be formed over the memory access devices 20. This insulating material 130 is patterned to form vias at locations corresponding to the memory access devices 20, and electrodes 150 and memory elements 140 are deposited within the vias. Alternatively, material for forming electrodes 150 and memory elements 140 may be deposited above the memory access devices 20 and patterned to align with memory access devices 20, followed by deposition of additional insulating material 130 in vias formed by the patterning. After formation of the electrodes 150 and memory elements 140, the data/sense lines 120 are deposited and patterned by any of the methods discussed with respect to the access lines 110 or using other suitable techniques. [0022] In the fabrication of memory device 200 (FIGS. 3A/3B), after formation of memory access devices 20, an insulating material 130 may be formed over the memory access devices 20. This insulating material 130 is patterned to form vias at locations corresponding to the memory access devices 20, and electrodes 150 are deposited within the vias. Alternatively, a material for forming electrodes 150 may be deposited above the memory access devices 20 and patterned to align with memory access devices 20, followed by deposition of additional insulating material 130 in vias formed by the patterning. After formation of the electrodes 150, a memory element 240 is deposited with any suitable methodology. Then, the data/sense lines 120 are deposited and patterned by any of the methods discussed with respect to the access lines 110 or using other suitable techniques. [0023] Alternatively, access lines 110 may be formed by first forming a blanket bottom electrode and then, after formation of the memory access devices 20 (as described below), a cap layer is formed over the memory access device and the blanket bottom electrode is patterned to form the access lines 110. [0024] It should be noted that while only a single-level cross point memory structure is illustrated in FIGS. 2A/2B and 3A/3B, multiple levels may be formed one over the other, i.e., stacked to form a three-dimensional memory array, thereby increasing memory density. [0025] The memory access device 20 of the disclosed embodiments may be formed by one of several self-aligning fabrication techniques, described below. [0026] Referring to FIGS. 4A-4D, one method by which the memory access devices 20 of the disclosed embodiments may be formed is described. As seen in FIG. 4A, word line 110 and insulating material 130 are formed.
This may be done, for example, by any suitable deposition methodology, including, for example, atomic layer deposition (ALD) methods or physical vapor deposition (PVD) methods, such as sputtering and evaporation, thermal deposition, chemical vapor deposition (CVD) methods, plasma-enhanced CVD (PECVD) methods, and photo-organic deposition (PODM). As seen in FIG. 4B, insulating material 130 is patterned to form vias 131 for memory access devices 20. This may be done, for example, by using photolithographic processing and one or more etches, or by any other suitable patterning technique. The vias 131 in insulating material 130 are formed to be at a sub-40nm scale. Next, as seen in FIG. 4C, a Cu- or Ag-doped chalcogenide material is deposited by electrochemical deposition. Suitable materials for deposition by this process include any Cu- or Ag-doped combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, O and Ge, as previously discussed. The exposed portions 22 of word line 110 provide a source for reduction/deposition for the electrochemical deposition process. The deposited Cu- or Ag-doped chalcogenide material thereby forms memory access devices 20 with the "mushroom" cap 25 overrun of the deposition process shown in FIG. 4C. After the electrochemical deposition process, the "mushroom" caps 25 are planarized, using, for example, a chemical mechanical planarization process, resulting in the structure shown in FIG. 4D. After planarizing, memory device 100/200 is completed by forming electrodes 150, memory elements 140/240 and bit lines 120 in accordance with known methods, as discussed above with respect to FIGS. 2A/2B and 3A/3B. [0027] FIG. 5A illustrates a perspective view (scanning electron microscope) of an array of memory access devices 20 formed in accordance with this embodiment. FIG. 5B illustrates a cross-sectional view of a portion of the array shown in FIG. 5A. As can be seen in FIG. 5A, the memory access devices 20 (seen as "mushroom" caps 25 in FIG. 5A) are very reliably formed only in the desired row and column positions for forming a three-dimensionally stacked memory array. As can be seen in FIG. 5B, the contact fill (within the vias 131 in insulating material 130) is void-free, demonstrates long-range fill, and the feature dimensions are at a sub-40nm scale. [0028] Using electrochemical deposition as a fabrication technique is inherently self-aligning because deposition occurs only on the exposed portions 22 of word lines 110. Further, using electrochemical deposition provides a bottom-up fill process because the exposed portions 22 of word line 110 are the only source for reduction during the electrochemical deposition process (e.g., deposition does not occur on the insulating material 130 located at the sides of opening 131). This results in a void-free contact fill of the high aspect ratio opening and thus a void-free memory access device 20. This process is able to be scaled to the desired sub-40nm feature dimensions because only ions in solution are required to get into the contact vias, thereby growing the material being deposited, as opposed to using physical deposition techniques, which require the material being deposited to fill the vias directly. [0029] Referring to FIGS. 4A, 4B, 6A, and 6B, another method by which the memory access devices 20 of the disclosed embodiments may be formed is described. Word line 110 and insulating material 130 are formed (FIG.
4A) and vias 131 are formed in insulating material 130 (FIG. 4B), as previously discussed. Then, as seen in FIG. 6A, a vapor phase deposition method is used to deposit a chalcogenide material 19 in vias 131 (FIG. 4B). Suitable materials for deposition by this process include any combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, O and Ge, as previously discussed. After deposition of the chalcogenide material 19, a dopant material 23 is deposited over chalcogenide material 19, as seen in FIG. 6B. This may be done, for example, by electrochemical deposition or by vapor phase deposition of the dopant material 23. Dopant material 23 may be, for example, copper or silver. The chalcogenide material 19 is then doped with the dopant material 23 using, for example, an ultraviolet (UV) photodoping step. In UV photodoping, diffusion of metal atoms is photon-induced by directing electromagnetic radiation (e.g., UV light) at the metal (e.g., dopant material 23), resulting in diffusion of metal atoms from the metal into the chalcogenide material 19. Other suitable methods of doping the chalcogenide material 19 with ions from dopant material 23 may be used. The chalcogenide material 19 is thus doped with ions from dopant material 23, resulting in a Cu- or Ag-doped chalcogenide material that forms memory access device 20. The dopant material 23 and the excess Cu- or Ag-doped chalcogenide material 20 are planarized to the level of the top surface of insulating material 130, resulting in the structure illustrated in FIG. 4D. This may be done, for example, using chemical mechanical planarization (CMP), such as CuCMP in the case of a copper dopant material 23. After planarizing, memory device 100/200 is completed by forming electrodes 150, memory elements 140/240 and bit lines 120 in accordance with known methods, as discussed above with respect to FIGS. 2A/2B and 3A/3B. [0030] Referring to FIGS. 4A, 4B, 6B, 7A and 7B, another method by which the memory access devices 20 of the disclosed embodiments may be formed is disclosed. Word line 110 and insulating material 130 are formed (FIG. 4A) and vias 131 are formed in insulating material 130 (FIG. 4B), as previously discussed. Then, as seen in FIG. 7A, a chalcogenide material 19 is deposited in vias 131 (FIG. 4B) using an electrochemical deposition method. The deposition occurs as discussed above with respect to FIG. 4C. As described above, using an electrochemical deposition technique is inherently self-aligning because deposition occurs only on the exposed portions 22 of word lines 110 (FIG. 4B). In this embodiment, suitable materials for deposition include any combination of Se and/or Te alloyed with one or more of Sb, In, Sn, Ga, As, Al, Bi, S, and O, as previously discussed. Then, as shown in FIG. 7B, gas-cluster ion beam (GCIB) modification is used to infuse the chalcogenide material 19 with Ge. In gas-cluster ion beam (GCIB) modification, a gas-cluster ion beam including Ge is accelerated onto the surface of the chalcogenide material 19 to infuse the Ge into the surface of the chalcogenide material 19. After infusion of Ge in the chalcogenide material 19, the Ge-infused chalcogenide material 19 is doped with a dopant material 23. This may be accomplished as previously described with respect to FIG. 6B.
Then, the dopant material 23 and the excess Ge-infused, Cu- or Ag-doped chalcogenide material 20 are planarized to the level of the top surface of insulating material 130, resulting in the structure illustrated in FIG. 4D. This may be done, for example, using chemical mechanical planarization (CMP), such as CuCMP in the case of a copper dopant material 23. After planarizing, memory device 100/200 is completed by forming electrodes 150, memory elements 140/240 and bit lines 120 in accordance with known methods, as discussed above with respect to FIGS. 2A/2B and 3A/3B. [0031] Alternatively to each of the above-described methods, a thicker insulating material 130 may be initially formed. In this instance, the electrochemical or vapor phase deposition of the Cu- or Ag-doped chalcogenide material 20 would not entirely fill the vias 131. Then, electrode 150 (and, in the instance of memory device 100, memory element 140) may also be formed within via 131, allowing the entire portion of memory device 100/200 to be self-aligned. [0032] Memory access devices formed in accordance with any of the previously disclosed embodiments may be formed at low temperatures, such as at or below 400 °C. The manufacturing of conventional memory access devices, such as silicon-based junction diodes, requires much higher processing temperatures. Low temperature formation allows for three-dimensional stacking of multiple memory levels without destruction of previously formed levels. Additionally, because the memory access devices are formed in a self-aligned manner, the methods are very cost-effective. Further, the use of Cu- or Ag-doped chalcogenide materials allows the memory access devices to have a high current density, e.g., greater than 10⁶ A/cm², while maintaining a low threshold ON voltage, e.g., less than 1 V. [0033] The cross point memory array 100/200 (FIGS. 2A/2B and 3A/3B) may also be fabricated as part of an integrated circuit. The corresponding integrated circuits may be utilized in a typical processor system. For example, FIG. 8 illustrates a simplified processor system 500 which includes a memory device 100/200 including the self-aligned Cu- or Ag-doped chalcogenide memory access devices 20, in accordance with any of the above-described embodiments. A processor system, such as a computer system, generally comprises a central processing unit (CPU) 510, such as a microprocessor, a digital signal processor, or other programmable digital logic devices, which communicates with an input/output (I/O) device 520 over a bus 590. The memory device 100/200 communicates with the CPU 510 over bus 590, typically through a memory controller. In the case of a computer system, the processor system 500 may include peripheral devices such as removable media devices 550 (e.g., a CD-ROM drive or DVD drive) which communicate with CPU 510 over the bus 590. If desired, the memory device 100/200 may be combined with the processor, for example CPU 510, as a single integrated circuit. [0034] The above description and drawings should only be considered illustrative of example embodiments that achieve the features and advantages described herein. Modifications and substitutions to specific process conditions and structures can be made. Accordingly, the claimed invention is not to be considered as being limited by the foregoing description and drawings, but is only limited by the scope of the appended claims.
CLAIMS What is claimed as new and desired to be protected by Letters Patent of the United States is: 1. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device, wherein the memory access device is formed of a doped chalcogenide material, and the memory access device is formed using a self-aligned fabrication method. 2. The method of claim 1, wherein the doped chalcogenide material comprises one of the group consisting of a Cu-doped combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 3. The method of claim 1, wherein the doped chalcogenide material comprises one of the group consisting of a Ag-doped combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 4. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device further comprises depositing the doped chalcogenide material using electrochemical deposition. 5. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device further comprises depositing the doped chalcogenide material using vapor phase deposition. 6. The method of claim 4, wherein during the electrochemical deposition process, the doped chalcogenide material is only formed on the exposed portions of the first conductive electrode. 7. The method of claim 1, wherein vias formed in the insulating material have a width of 40nm or less. 8. The method of claim 1, wherein the self-aligned fabrication method for forming the memory access device occurs at temperatures at or below 400 °C. 9. The method of claim 4, wherein forming a memory access device further comprises planarizing the electrochemically deposited doped chalcogenide material to a top surface of the insulating material. 10. The method of claim 1, wherein the first conductive electrode is a word line. 11. The method of claim 1, further comprising forming a second conductive electrode over the memory element. 12. The method of claim 11, wherein the second conductive electrode is a bit line. 13. The method of claim 11, wherein the memory device is a cross point memory. 14. The method of claim 13, further comprising forming a plurality of repeated levels of individual memory devices, each repeated level comprising the first conductive electrode, the insulating material, the memory access device, the memory element and the second conductive electrode, wherein the cross point memory device comprises multiple levels of memory elements and memory access devices such that it is a three-dimensionally stacked memory device, and wherein each memory access device is a select device for a corresponding memory element. 15.
A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material using a self-aligned fabrication method; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device. 16. The method of claim 15, wherein the self-aligned fabrication method further comprises: depositing a chalcogenide material; depositing a dopant material on the chalcogenide material; and causing the chalcogenide material to become doped with the dopant material. 17. The method of claim 16, wherein the chalcogenide material is deposited using vapor phase deposition. 18. The method of claim 16, wherein the chalcogenide material is deposited using electrochemical deposition. 19. The method of claim 16, wherein the dopant material is selectively deposited on the chalcogenide material using one of electrochemical deposition or physical vapor deposition of the dopant material. 20. The method of claim 16, wherein the chalcogenide material is a combination of Se and/or Te alloyed with one or more of Sb, In and Ge. 21. The method of claim 16, wherein the dopant material is one of Cu or Ag. 22. The method of claim 18, wherein during the electrochemical deposition process, the chalcogenide material is only formed on the exposed portions of the conductive electrode. 23. The method of claim 16, further comprising planarizing the dopant material and portions of the doped chalcogenide material extending above the vias in the insulating material. 24. A method of forming a memory device comprising: forming an insulating material over a first conductive electrode; patterning the insulating material to form vias that expose portions of the first conductive electrode; forming a memory access device within the vias of the insulating material using a self-aligned fabrication method; and forming a memory element over the memory access device, wherein data stored in the memory element is accessible via the memory access device, wherein the self-aligned fabrication method further comprises: depositing a chalcogenide material; infusing the chalcogenide material with Ge; depositing a dopant material on the Ge-infused chalcogenide material; and causing the Ge-infused chalcogenide material to become doped with the dopant material. 25. The method of claim 24, wherein the chalcogenide material is deposited using vapor phase deposition. 26. The method of claim 24, wherein the chalcogenide material is deposited using electrochemical deposition. 27. The method of claim 24, wherein the chalcogenide material is infused with Ge using gas-cluster ion beam modification. 28. The method of claim 24, wherein the dopant material is selectively deposited on the Ge-infused chalcogenide material using one of electrochemical deposition or physical vapor deposition of the dopant material. 29. The method of claim 24, wherein the chalcogenide material is a combination of Se and/or Te alloyed with one or more of Sb and In. 30. The method of claim 24, wherein the dopant material is one of Cu or Ag. 31.
The method of claim 24, wherein during the electrochemical deposition process, the chalcogenide material is only formed on the exposed portions of the conductive electrode. 32. The method of claim 24, further comprising planarizing the dopant material and portions of the doped chalcogenide material extending above the vias in the insulating material. |
The present invention relates to a process for preparing a wafer for chip packaging that minimizes stress and torque on wafer components during back grinding. The wafer has fabricated thereon a plurality of dies in a die side thereof opposite a back side thereof. A protective coating is spun on the die side to protect the dies. The wafer is separated into a plurality of connected pieces by scratching or cutting a recess into streets or scribe lines in the die side. The connected pieces of the wafer are secured to a surface with the back side thereof exposed. Material is removed from the back side of the wafer by chemical, mechanical, or chemical-mechanical methods until each piece is separated or disconnected from the other pieces. The protective coating is removed. The pieces can be situated upon a flexible surface that is stretched to increase the separation between pieces. Each die in the die side of each piece is then packaged into a die package. |
What is claimed and desired to be secured by United States Letters Patent is: 1. A die singulation method comprising: providing a semiconductor substrate having a back side opposite a die side that has a plurality of die formed therein; adhesively adhering the die side to a stretchable substrate; forming a recess in the die side, wherein: a first portion of said semiconductor substrate having at least one die therein is on one side of said recess; and a second portion of said semiconductor substrate having at least one die therein is on a side of said recess opposite that of said first portion of said semiconductor substrate; abrading said back side of said semiconductor substrate to separate from contact the first portion of said semiconductor substrate from the second portion of said semiconductor substrate; and stretching said stretchable substrate to increase the separation between said first and second portions of said semiconductor substrate. 2. The method as defined in claim 1, further comprising packaging at least one of the die into a die package. 3. The method as defined in claim 1, wherein the stretchable substrate comprises a plastic sheet. 4. The method as defined in claim 3, wherein the plastic sheet and the die side are adhesively adhered to a double-sided adhesive tape. 5. The method as defined in claim 1, wherein abrading said back side of said semiconductor substrate comprises grinding the back side of the first and second portions of the semiconductor substrate. 6. The method as defined in claim 1, further comprising, after forming said recess, securing the die side of said semiconductor substrate to a surface such that the back side thereof is exposed. 7. The method as defined in claim 1, wherein: said first and second portions of said semiconductor substrate each have a first thickness; and abrading said back side of said semiconductor substrate comprises mechanically removing material from said back side of said semiconductor substrate until said first and second portions of said semiconductor substrate each have a second thickness that is less than said first thickness. 8. The method as defined in claim 1, wherein the recess is a street or scribe line formed by cutting the die side of the semiconductor substrate. 9. The method as defined in claim 1, wherein abrading said back side of said semiconductor substrate changes the thickness of said first and second portions of said semiconductor substrate to be in a range from about 0.2 millimeters to about 0.762 millimeters. 10. The method as defined in claim 1, further comprising, prior to adhesively adhering said die side to said stretchable substrate, adhesively adhering said stretchable substrate, on a side thereof that is opposite contact with said die side of said semiconductor substrate, to a rigid planar surface. 11. A die singulation method comprising: providing a semiconductor substrate having a back side opposite a die side that has a plurality of die formed therein; adhesively adhering the die side to a stretchable substrate; forming a recess in the die side to define first, second, and third portions of the semiconductor substrate; abrading the third portion of the semiconductor substrate at the back side thereof to disconnect the first and second portions of said semiconductor substrate; and stretching said stretchable substrate to increase the separation between said first and second portions of said semiconductor substrate. 12.
The method as defined in claim 11, further comprising, prior to adhesively adhering said die side to said stretchable substrate, adhesively adhering said stretchable substrate, on a side thereof that is opposite contact with said die side of said semiconductor substrate, to a rigid planar surface. 13. The method as defined in claim 12, wherein forming said recess comprises cutting a scribe line in the die side of said semiconductor substrate. 14. The method as defined in claim 13, wherein cutting said scribe line in the die side of said semiconductor substrate is a process selected from a group consisting of: moving a scribe blade under force across the die side of the semiconductor substrate so as to form said recess; and cutting into the die side of the semiconductor substrate using a rotating saw blade to form said recess. 15. A die singulation method comprising: providing a semiconductor substrate having a back side opposite a die side that has a plurality of die formed therein; adhesively adhering the die side to a stretchable substrate; forming a plurality of recesses in the die side to define between adjacent recesses a plurality of pieces of said semiconductor substrate, each said piece having at least one die therein; abrading the back side of the semiconductor substrate to separate the pieces one from another; and stretching said stretchable substrate to increase the separation between the pieces one from another. 16. The method as defined in claim 15, wherein the stretchable substrate comprises a flexible plastic sheet. 17. The method as defined in claim 15, wherein forming said plurality of recesses comprises cutting a plurality of parallel and perpendicular scribe lines in the die side of said semiconductor substrate. 18. The method as defined in claim 15, wherein abrading the back side of the semiconductor substrate is a process selected from a group consisting of a mechanical planarization and a chemical-mechanical planarization. 19. The method as defined in claim 15, further comprising, prior to adhesively adhering said die side to said stretchable substrate, adhesively adhering said stretchable substrate, on a side thereof that is opposite contact with said die side of said semiconductor substrate, to a rigid planar surface. 20. A die singulation method comprising: providing a semiconductor substrate having a back side opposite a die side, wherein: said die side has formed therein a plurality of die and recesses; and adjacent recesses separate each die from the other said dies; adhesively adhering the die side to a stretchable substrate; forming a plurality of recesses in the die side to define between adjacent recesses a plurality of pieces of said semiconductor substrate, each said piece having at least one die therein; abrading the back side of the semiconductor substrate to separate the semiconductor substrate into a plurality of pieces each having: a predetermined thickness; said back side opposite said die side; and one die of said plurality of dies formed in the die side thereof; and stretching said stretchable substrate to increase the separation between the pieces one from another. 21. The method as defined in claim 20, wherein the stretchable substrate comprises a flexible plastic sheet. 22.
A chip packaging method comprising: providing a semiconductor wafer having a back side opposite a die side including a plurality of die separated by a plurality of scratches cut into scribe lines on the die side; forming a protective coating upon the die side; adhesively adhering the protective coating to a stretchable substrate; grinding the back side to divide the semiconductor wafer into a plurality of separated, unconnected die, each said die having said back side opposite said die side and a thickness in a range from about 0.762 millimeters to about 0.2 millimeters; stretching the stretchable substrate to increase the distance between adjacent dice; and packaging each said die. 23. The method as defined in claim 22, wherein the stretchable substrate is composed of plastic. 24. The method as defined in claim 22, wherein the stretchable substrate comprises a sheet composed of plastic. 25. The method as defined in claim 24, wherein: the sheet composed of plastic is situated upon a double-sided adhesive tape; and the double-sided adhesive tape is upon a table. 26. A chip packaging method comprising: providing a semiconductor wafer having integrated circuitry including a plurality of dies formed within a die side opposite a back side thereof; cutting recesses into the die side, each said recess having an opening at said die side and a closed end proximal said back side; forming a protective coating over said die side; adhesively adhering the protective coating to a stretchable substrate with the back side of the semiconductor wafer exposed; thinning the thickness of the semiconductor wafer by grinding the back side thereof until the closed end of each of the recesses is breached, whereby the semiconductor wafer is separated into a plurality of unconnected pieces each having said die side and said back side and one die of said plurality of dies formed therein; stretching the stretchable substrate to increase the distance between adjacent dice; and packaging each said die. 27. The chip packaging method as defined in claim 26, wherein: the stretchable substrate is composed of plastic that is situated upon a double-sided adhesive tape; and the double-sided adhesive tape is upon a table. 28. The chip packaging method as defined in claim 26, wherein said thinning is a process selected from a group consisting of a mechanical process and a chemical-mechanical process. 29. The chip packaging method as defined in claim 26, wherein cutting recesses into the die side is a sawing operation using a saw blade that saws into but not through said semiconductor wafer. 30. The chip packaging method as defined in claim 26, wherein cutting recesses into the die side cuts into but not through the semiconductor wafer at scribe lines in the die side. 31. The chip packaging method as defined in claim 26, wherein, during said thinning, said protective coating adheres said semiconductor wafer to a table with the back side of the semiconductor wafer exposed. 32.
A chip packaging method for a semiconductor wafer having fabricated therein a plurality of dies in a die side of said semiconductor wafer that is opposite a back side, wherein the die side has a photoresist layer thereover, the method comprising:cutting into the photoresist layer and into but not through said die side so as to define in said die side a plurality of connected pieces; removing material, by a process selected from a group consisting of mechanical planarization and chemical-mechanical planarization, from the back side of said semiconductor wafer until each of said plurality of pieces is separated from the other of said plurality of pieces, each said piece having: said back side opposite said die side; and at least one die formed in the die side thereof; stretching a flexible surface upon which the semiconductor wafer is situated so as to increase the separation between each said piece and the other of said plurality of pieces, wherein the flexible surface is composed of plastic that is situated upon a double-sided adhesive tape, and the double-sided adhesive tape is upon a rigid planar surface; removing said photoresist layer from the die side of each said piece; and packaging in a die package each said at least one die in the die side of each said piece of said plurality of pieces. 33. A chip packaging method for a semiconductor wafer having fabricated therein a plurality of dies in a die side of said semiconductor wafer that is opposite a back side, the method comprising:forming a photoresist layer over said die side; cutting into the photoresist layer and into but not through said die side so as to define in said die side a plurality of connected pieces; removing material, by a process selected from a group consisting of chemical etching and chemical-mechanical planarization, from the back side of said semiconductor wafer until each of said plurality of pieces is separated from the other of said plurality of pieces, wherein the photoresist layer protects said die side from chemicals in the material removal process, each said piece having: said back side opposite said die side; and at least one die formed in the die side thereof; stretching a flexible surface upon which the semiconductor wafer is situated so as to increase the separation between each said piece and the other of said plurality of pieces; removing said photoresist layer from the die side of each said piece; and packaging in a die package each said at least one die in the die side of each said piece of said plurality of pieces. |
This is a continuation of U.S. patent application Ser. No. 09/026,999, filed on Feb. 23, 1998, now U.S. Pat. No. 6,162,703, titled Packaging Die Preparation, which is incorporated herein by reference. BACKGROUND OF THE INVENTION 1. The Field of the Invention The present invention relates to fabrication of semiconductor structures. More particularly, the present invention relates to chip packaging processes and pre-packaging wafer preparation including wafer thinning and die separation. 2. The Relevant Technology In the microelectronics industry, a substrate refers to one or more semiconductor layers or structures which include active or operable portions of semiconductor devices. In the context of this document, the term "semiconductive substrate" is defined to mean any construction comprising semiconductive material, including but not limited to bulk semiconductive material such as a semiconductive wafer, either alone or in assemblies comprising other materials thereon, and semiconductive material layers, either alone or in assemblies comprising other materials. The term substrate refers to any supporting structure including but not limited to the semiconductive substrates described above. The term semiconductor substrate is contemplated to include such structures as silicon-on-insulator and silicon-on-sapphire. In the microelectronics industry, the process of miniaturization entails the shrinking of individual semiconductor devices and crowding more semiconductor devices into a given unit area. Included in the process of miniaturization is the effort to shrink the size of chip or die packages. In the fabrication sequence, chip packaging follows the fabrication of chips or dies upon a semiconductor substrate or wafer. After a semiconductor wafer has been fabricated and the circuits thereon have been processed to completion, the die or chip packaging process begins. The purpose of the die or chip packaging process is to place individual die into a package which can then be inserted into a printed circuit board or other substrate so as to connect the die to a larger functional circuit. Prior to chip packaging, other steps may need to be undertaken in order to prepare a wafer. One step is reducing the thickness of the wafer. It is desirable to reduce the thickness of a wafer because a greater amount of time and expense is required to saw through a thick wafer in order to separate the dies thereon. Typically, wafer sawing produces a precise die edge. Nevertheless, sawing adds expense and processing time, and requires expensive machinery. It may also be desirable to thin the wafer if contaminants have entered into the backside of a wafer opposite its circuit side where the electrical circuitry has been formed. For instance, dopants may have entered the backside of the wafer during a fabrication process. These dopants will form electrical junctions that may interfere with the circuitry on the front side of the wafer. Thus, in order for the electrical circuits to properly operate, the thinning of the contaminated portion of the backside of the wafer may be required. Conventionally, thinning of the wafer is performed prior to separating the dies from the wafer. This thinning step typically reduces the wafers to a thickness between about 0.762 millimeters and about 0.2 millimeters. Several processes are available to perform the thinning operation. Specifically, a mechanical or chemical-mechanical operation, such as planarization, can be used to thin the wafers. 
Also, the backside of the wafer can be chemically etched in order to reduce the thickness thereof. The wafer thinning operation can cause scratching of the top side of the wafer or induce stress during the abrading operation, which may cause the wafer to break. In order to perform the thinning operation, the circuit side of the wafer is placed face down upon a surface. Preferably, the circuit side of the wafer will be protected from scratching or other surface defects. A material removal operation then begins to remove material from the backside of the wafer. Where material is removed from the backside of the wafer using a chemical etchant, it is also necessary to protect the circuit side of the wafer. Such a method includes the forming of a photoresist layer on the circuit side of the wafer. Sheets composed of a polymer material having an adhesive back can also be fitted over the circuit side of the wafer to protect the same. It is desirable to thin wafers before packaging in order to reduce the cost of packaging the dies after separation. The packaging process becomes more expensive as the wafer thickness goes up. In particular, a deeper die attach cavity is required if a wafer is thicker. As such, the combination of a deeper die attach cavity and the thicker die results in a more expensive chip package. Thus, wafer thinning is an important part of reducing the cost of chip packaging. FIG. 1 depicts a grinding table 12 having an adhesive film 14 thereon. A semiconductor substrate 10 is on adhesive film 14. Semiconductor substrate 10 includes a die side 16 and a base layer 18. Base layer 18 has a back surface 20 thereon. Die side 16 has a plurality of die formed therein which are to be singulated by a division of semiconductor substrate 10 into a plurality of pieces. Back surface 20 is subjected to a back grinding process. The purpose of the back grinding process to be performed upon back surface 20 is to thin base layer 18 prior to singulating die side 16. As seen in FIG. 1, a distance 25 indicates a distance between a center of semiconductor substrate 10 and a grinding force 26 applied to back surface 20 by a grinding wheel 24 via a grinding pad 22 thereon. With the increase in distance 25 and/or an increase in grinding force 26, the torque product of distance 25 and grinding force 26 increases. With the increase in torque, the propensity of semiconductor substrate 10 to crack or break improperly also increases. As such, it would be desirable to reduce the propensity of semiconductor substrate 10 to break during a substrate thinning process. After wafer thinning, the wafer is divided. Conventional techniques for die separation involve sawing and scribing processes. The sawing process uses a saw and a table to cut scribe or saw lines in the circuit side of the wafer. The wafer is placed upon the table and a rotating saw blade is brought down in contact with the circuit side of the wafer. As each scribe or saw line is cut into the wafer, a stress line forms along the crystalline interior of the wafer substantially perpendicular to the backside of the wafer. After the scribe or saw lines are cut into the wafer, a stress is applied to the scribe lines to separate the wafer and individual die. This stress may be applied via a roller or other pressure technique. 
Alternatively, the rotating saw blade can cut all the way through the wafer to separate the wafer and individual die. An alternative technique to sawing the wafer into singulated dies is a scribing technique which cuts a scratch along scribe lines on the circuit side of the wafer by application of a force from a diamond-tip scribe. As in sawing, the dies are separated by applying a stress to the wafer, such as a roller applied to a surface of the wafer. Upon the application of the pressure from the roller, individual dies will be separated as they break away from the consolidated wafer along the scratched scribe lines. Due to the crystalline structure of the wafer, the separation of the die will follow the scribe line approximately perpendicular to the opposing surfaces of the wafer. As such, stress will cause the wafer to break along the scratched lines. FIG. 2 depicts semiconductor substrate 10 including die side 16 and base layer 18. Semiconductor substrate 10 has saw or scribe lines marked within die side 16 and above stress lines. Each scribe line is cut into die side 16 by a cutting tool 28 with a cutting force 30. Cutting tool 28 can be a diamond-tipped scribe or a rotating saw blade. Once the saw or scribe lines are cut within die side 16, a roller 32 having a surface 34 applies a roller force 36 to die side 16 to separate a singulated die 19 along each stress line. While it is desirable to thin a wafer prior to singulating the dies thereon due to the lower cost of packaging and the shorter time of throughput, thinning the wafer also causes an increased likelihood of breaking the wafer prematurely and prior to singulation. Breaking the wafer prematurely can occur during any of a chemical, mechanical, or chemical-mechanical thinning operation, wherein forces are induced within the wafer. This problem is further compounded by a desire to fabricate more dies upon a semiconductor wafer. In order to put more dies on a semiconductor wafer, the diameter of a semiconductor wafer is increased. With an increase in diameter, an increase in stress is realized as pressure is applied to the wafer during scribing or sawing operations. As the radius of the pressure from the center of the wafer increases, the torque product also increases and the propensity of the larger wafer to break goes up. A warped or cracked wafer reduces yield and causes other problems in the subsequent chip packaging process. Given the foregoing, it would be advantageous to reduce the forces, including stress-induced forces, in the wafer during the wafer thinning process. It would also be desirable to provide a technique for thinning the wafer prior to packaging individual die while decreasing the propensity of the wafer to break. It would also be advantageous to develop such a technique for use with larger wafers. SUMMARY OF THE INVENTION The present invention relates to a pre-packaging chip processing method that avoids stresses upon a semiconductor substrate that would otherwise cause breakage. In general, the present invention contemplates thinning of a semiconductor substrate having dies thereon after dividing the substrate into pieces or singulated dies. By dividing the substrate prior to thinning, the substrate is subjected to lower stress and torque during the thinning process. In the inventive method, a semiconductor substrate having a die side opposite a back side is provided. The die side has a plurality of die formed therein. 
The back of the semiconductor substrate is secured to a surface with the die side thereof exposed. Saw or scribe lines are cut along streets into the die side of the semiconductor substrate. As such, the saw or scribe lines separate pieces of the semiconductor substrate. A protective layer is applied to the die side and the semiconductor substrate is then inverted onto a surface. In this position, the protective layer is secured to the surface and the back side of the semiconductor substrate is exposed. The back side of the semiconductor substrate is then subjected to a material removal process until each piece in between the saw or scribe lines is separated from the other pieces. The saw or scribe lines that are cut into the die side serve to relieve or reduce the stress and other forces that act upon the substrate during the material removal process. Preferably, each piece will have one die thereon. Individual dies are then put into chip packages. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter. BRIEF DESCRIPTION OF THE DRAWINGS In order that the manner in which the above-recited and other advantages of the invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which: FIG. 1 is an elevational cross-section view of a back grinding operation upon a back side of a semiconductor substrate using a grinding wheel; FIG. 2 is a cross-sectional elevational view of a semiconductor substrate upon a dicing table, the semiconductor substrate having been cut along scribe lines, and a pressure roller applying a downward pressure upon the circuit side of the semiconductor substrate so as to singulate each die on the semiconductor substrate; FIG. 3 is a cross-sectional elevational view of a film that is upon a surface, where saw or scribe lines have been cut into a semiconductor substrate situated upon the film; FIG. 4 depicts the semiconductor substrate seen in FIG. 3, where the semiconductor substrate has been inverted and placed upon a surface with a protective coating therebetween, and where a material removal process is performed upon the back side of the semiconductor substrate; FIG. 5 depicts the semiconductor substrate seen in FIG. 4, after the material removal process upon the back surface of the semiconductor substrate has been performed sufficiently to divide the semiconductor substrate into separated thinned singulated pieces. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS Reference will now be made to the drawings wherein like structures will be provided with like reference designations. It is to be understood that the drawings are diagrammatic and schematic representations of the embodiment of the present invention and are not drawn to scale. The present inventive method first performs a substrate singulation process and then performs a back side material removal process. As such, embodiments of the inventive method will be discussed below by first referring to FIG. 2 and then FIG. 1. As seen in FIG. 
2, in the case of the inventive method, semiconductor substrate 10 has not been subjected to a thinning operation. The process depicted in FIG. 2 is performed upon semiconductor substrate 10 by scratching saw or scribe lines within die side 16 and above each stress line. FIG. 3 shows the result of the operation depicted in FIG. 2, where table 12 has a layer 14 thereon and semiconductor substrate 10 is situated upon layer 14. Saw or scribe lines 31 have been cut by scribe or saw 28 with force 30 into semiconductor substrate 10 at several locations in die side 16. Instead of applying force 36 to semiconductor substrate 10 with roller 32 as seen in FIG. 2, semiconductor substrate 10 is turned upside down and placed upon a layer 14 as seen in FIG. 4. FIG. 4 depicts the structure seen in FIG. 3 following further processing in which surface 12 has layer 14 thereon, and semiconductor substrate 10 is in contact with layer 14. Semiconductor substrate 10 is secured to surface 12 by layer 14 so as to be relatively stable with respect to surface 12. The reorientation of semiconductor substrate 10 seen in FIGS. 3 and 4 leaves back surface 20 of semiconductor substrate 10 exposed. A grinding wheel 24 having a grinding surface 22 is depicted in FIG. 4. Grinding wheel 24 is used in a material removal process performed upon back surface 20. As with FIG. 3, FIG. 4 depicts a grinding table 12 having an adhesive film 14 thereon. Alternatively, film 14 can be a die covering so as to protect a die side 16 situated thereon. The purpose of the material removal process to be performed upon back surface 20 is to thin base layer 18 and separate semiconductor substrate 10 into separate pieces prior to packaging each singulated die on die side 16. The present invention contemplates that the thinning process can be performed upon a semiconductor substrate by mechanical, chemical, or chemical-mechanical processes, or combinations thereof. As grinding wheel 24 abrades back surface 20, semiconductor substrate 10 is thinned. The back grinding process upon back surface 20 of semiconductor substrate 10 continues until each piece 19 is separated from other pieces 19. Preferably, each piece 19 will have a thickness in a range from about 0.2 millimeters to about 0.762 millimeters after the material removal from back surface 20 of singulated piece 19. Singulated pieces 19 seen in FIG. 5 may have a single die or multiple dies thereon after the back grinding operation. Where more than one die is on a piece 19, further and conventional singulation processing is performed upon the piece so as to separate each die from other dies prior to packaging. Once singulated pieces 19 have been separated as seen in FIG. 5, layer 14, which is preferably flexible, can be stretched so as to further separate singulated pieces 19 one from another. Layer 14 can be a thin flexible plastic film or it can be a double-sided adhesive tape, or a combination of these. 
The purpose of layer 14 is to hold semiconductor substrate 10 stable relative to table 12 and/or provide a stretching medium so as to separate singulated pieces 19 after the material removal process upon back side 20 has continued until pieces 19 have been separated. Once singulated pieces 19 are separated one from another, the removal of singulated pieces 19 from layer 14 becomes simplified since each singulated piece 19 is separated sufficiently one from another. Since semiconductor substrate 10 is first scribed and cut, and then subjected to a material removal process, the prior art problems encountered during thinning are overcome. The cut saw or scribe lines relieve stress in semiconductor substrate 10 prior to grinding or other material removal process. The purpose of the material removal operation is to thin each singulated piece 19 so that a minimal amount of packaging materials can be used for packaging each singulated die 19. FIG. 1 depicts the type of back grinding operation that can be performed upon semiconductor substrate 10 as seen in FIG. 4. By grinding upon the scribed semiconductor substrate 10 seen in FIG. 4, less torque and other stress forces are experienced during the back grinding process due to the saw or scribe lines which serve to relieve stress. After the dies are singulated, dies that are known to be functioning properly are selected in a vacuum picking process and placed on a section plate using a vacuum wand. If a plastic flexible film is used for securing the dies to a support surface, the film can be stretched so as to separate the dies one from another and thus aid in the vacuum picking process. From there, dies are inspected and passed on to a die attach station for subsequent chip packaging. The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims and their combination in whole or in part rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
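The separation condition at the heart of this dice-before-grind flow reduces to simple thickness arithmetic: the pieces singulate once back-side material removal reaches the closed end of each saw or scribe recess. The short Python sketch below illustrates that relationship; the starting thickness and scribe depth used here are illustrative assumptions, and only the final-thickness range of about 0.2 to about 0.762 millimeters is taken from the description above.

```python
# Minimal sketch of the dice-before-grind separation condition.
# The wafer thickness and scribe depth below are assumed values;
# only the 0.2-0.762 mm final-thickness range comes from the text.

WAFER_THICKNESS_MM = 0.725   # assumed starting substrate thickness
SCRIBE_DEPTH_MM = 0.30       # assumed depth of the saw/scribe recesses
TARGET_THICKNESS_MM = 0.25   # desired thickness of each singulated piece


def removal_needed_for_separation(wafer_mm: float, scribe_mm: float) -> float:
    """Back-side material that must be removed before the closed end of
    each recess is breached and the pieces separate."""
    return wafer_mm - scribe_mm


def pieces_separate(final_mm: float, scribe_mm: float) -> bool:
    """Pieces are singulated once the remaining thickness is no greater
    than the scribe depth."""
    return final_mm <= scribe_mm


removal = WAFER_THICKNESS_MM - TARGET_THICKNESS_MM
threshold = removal_needed_for_separation(WAFER_THICKNESS_MM, SCRIBE_DEPTH_MM)
print(f"material removed from back side: {removal:.3f} mm")
print(f"separation occurs after removing: {threshold:.3f} mm")
assert pieces_separate(TARGET_THICKNESS_MM, SCRIBE_DEPTH_MM)
assert 0.2 <= TARGET_THICKNESS_MM <= 0.762  # range given in the description
```
|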
A computer motherboard is described. That motherboard includes a memory controller and a memory section. A first trace couples the memory controller to the memory section, and a second trace couples the memory controller to the memory section. The first trace is joined with the second trace at the memory controller, the second trace is routed in parallel with the first trace, and the second trace is longer than the first trace. Also described is a computer system that includes this motherboard and a memory card. |
What is claimed is: 1. A computer motherboard comprising:a memory controller; a memory section that includes a plurality of memory devices that are separated into a first set and a second set at a junction; a first trace coupling the memory controller to the memory section; and a second trace coupling the memory controller to the memory section, the first trace joined with the second trace at the memory controller and at the junction, the second trace routed in parallel with the first trace, and the second trace being longer than the first trace; wherein the first trace is between about 4 inches and about 8 inches long, the second trace is at least about 2 inches longer than the first trace and is between about 6 and about 14 inches long, and the memory devices are separated from each other by between about 0.1 inch and about 1 inch. |
FIELD OF THE INVENTION The present invention relates to motherboard interconnects. BACKGROUND OF THE INVENTION FIG. 1 represents a computer system that includes a typical DRAM bus far end cluster. System 100 includes memory controller 101 that is coupled to far end cluster 102 at "T" junction 103 by relatively long trace 104. Far end cluster 102 includes several closely spaced DRAMs 105. DRAMs 105 are separated into first set 106 and second set 107 at junction 103. First signal line 108 passes from junction 103 to last DRAM 109 included in first set 106 and second signal line 110 passes from junction 103 to last DRAM 111 included in second set 107. Impedance mismatch between trace 104 and the combination of signal lines 108, 110 may result in poor signal integrity for signals that DRAMs 105 receive. FIG. 2 represents a signal waveform that may result when driving a signal into a low impedance far end cluster like the one illustrated in FIG. 1. Because of the impedance mismatch, signal reflections, which occur when a signal reaches the cluster, produce ledges 201. The load that DRAMs 105 present on signal lines 108, 110 can cause those ledges, e.g., ledge 202, to have slope reversal (i.e., regions where a rising edge experiences a short voltage drop or where a falling edge experiences a short voltage rise). To prevent such ledges from occurring at the DRAM receiver's switching threshold, stable system design may require all timings to be taken after the ledges. For example, if a ledge with slope reversal occurs on a signal's rising edge, it may be necessary to delay the latching of data to ensure that the receiver properly detects a voltage that exceeds the switching threshold. Adding delay to ensure that the receiver switches state as intended may reduce the maximum speed at which signals are driven between memory controller 101 and DRAMs 105. Even when adding this delay, unless there is sufficient noise margin, such ledges might still cause a false trigger to occur, when data is to be latched into a DRAM, if they cause the slew rate to be insufficient to change the state of the input receiver at that time. For example, lines 203 and 204 may designate the input voltage levels required for the receiver to switch: line 203 designating the input high voltage ("Vih") and line 204 designating the input low voltage ("Vil"). When a rising edge passes through Vih, the DRAM receiver will switch from a first state to a second state (e.g., a low state to a high state). Likewise, when a falling edge passes through Vil, the DRAM receiver will switch from a first state to a second state. The DRAM receiver will properly switch state as long as the voltage exceeds the switching threshold (for a rising edge), or falls below the switching threshold (for a falling edge), when the receiver latches data. As long as ledges 201 occur outside of the switching region, they should not prevent the correct latching of data into the receiver. As a result of system noise, however, receiver thresholds could change dynamically, causing ledges, including ledges with slope reversal, to develop within the switching region, even when the system was designed to prevent that effect. If that occurs, incorrect data might be latched into the receiver. Accordingly, there is a need for an improved motherboard interconnect that prevents formation of ledges with slope reversal as a signal rises and falls. 
There is a need for such a motherboard interconnect that enables DRAM receivers to latch data at a relatively high frequency without risk that such ledges will develop and cause the receiver to accept incorrect data. The present invention provides such a motherboard interconnect. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 represents a computer system that includes a DRAM bus far end cluster. FIG. 2 illustrates a signal waveform that may result when driving a signal into a low impedance far end cluster like the one illustrated in FIG. 1. FIG. 3 represents an embodiment of the motherboard of the present invention. FIG. 4 contrasts the signal waveform of FIG. 2 with a signal waveform that may result when driving a signal over the motherboard of FIG. 3 and into a low impedance far end cluster. DETAILED DESCRIPTION OF THE PRESENT INVENTION A computer motherboard is described. That motherboard includes a memory controller and a memory section. The memory controller is coupled to the memory section by first and second traces. The first trace is joined with the second trace at the memory controller, the second trace is routed in parallel with the first trace, and the second trace is longer than the first trace. Also described is a computer system that includes this motherboard and a memory card. In the following description, numerous specific details are set forth such as component types, dimensions, etc., to provide a thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the invention may be practiced in many ways other than those expressly described here. The invention is thus not limited by the specific details disclosed below. FIG. 3 represents an embodiment of a motherboard that implements the present invention. Motherboard 300 includes memory controller 301 and memory section 315. First trace 304 and second trace 316 couple memory section 315 to memory controller 301. First trace 304 and second trace 316 are joined at memory controller 301. Second trace 316 is routed in parallel with first trace 304, and second trace 316 is longer than first trace 304. Traces 304, 316 may be routed on the same printed circuit board ("PCB") layer, or, alternatively, may be routed on different PCB layers. In this embodiment, a plurality of memory devices 305 (e.g., DRAMs) form far end cluster 302. DRAMs 305 may be mounted directly to motherboard 300, or, alternatively, mounted onto a memory card that may be inserted into a socket that is mounted onto motherboard 300 at memory section 315. DRAMs 305 are separated into first set 306, which includes four DRAMs, and second set 307, which also includes four DRAMs, at "T" junction 303. First and second traces 304, 316 meet at junction 303. Junction 303 may be located on motherboard 300, when DRAMs 305 are directly mounted to it, or instead be located on a memory card. DRAMs 305 preferably are closely spaced, such that they are separated from each other by between about 0.1 inch and about 1 inch. In embodiments where DRAMs are mounted onto one side of motherboard 300, or onto one side of a memory card, DRAMs 305 are preferably separated by between about 0.5 inch and about 1 inch. 
When DRAMs 305 are mounted to both sides of a memory card (e.g., with DRAMs 0, 2, 4, and 6 mounted to one side of the memory card, and DRAMs 1, 3, 5, and 7 mounted to the other side), they preferably are separated by between about 0.1 inch and about 0.5 inch. First signal line 308 passes from junction 303 to last DRAM 309 included in first set 306 and second signal line 310 passes from junction 303 to last DRAM 311 included in second set 307. In a preferred embodiment of the present invention, the length of second trace 316 exceeds the length of first trace 304 by an amount that ensures that the additional time required for a signal to pass over second trace 316 from memory controller 301 to junction 303, when compared to the time required for a signal to pass over first trace 304 from memory controller 301 to junction 303, is about equal to the time required for a signal to pass from junction 303 to last DRAMs 309, 311. The degree to which the length of trace 316 must exceed the length of trace 304 to meet this objective will depend upon the number of DRAMs that are included in far end cluster 302 and the amount of separation between those DRAMs. In a preferred embodiment, first trace 304 should be between about 4 and about 8 inches long and second trace 316 should be between about 2 and about 6 inches longer than trace 304. For example, if first trace 304 is about 4 inches long, then second trace 316 should be between about 6 and about 10 inches long, depending upon the signal delay needed to match the time required for a signal to pass from junction 303 to DRAMs 309, 311. If first trace 304 is about 8 inches long, then second trace 316 should be between about 10 and about 14 inches long. Traces 304, 316 and signal lines 308, 310 preferably should each have a width that is between about 0.003 and about 0.008 inches. FIG. 4 contrasts signal waveform 420 of FIG. 2 with signal waveform 430, which may result when driving a signal over the motherboard of FIG. 3 and into a low impedance far end cluster. Adding second trace 316 removes ledges that have slope reversal from the waveform. In addition, adding second trace 316 increases the slew rate, as any slew rate reduction that results from delaying one-half of the signal edge is more than compensated for by the slew rate increase that results from removing ledges with slope reversal. Increasing slew rate enables switching threshold expansion, which in turn enhances a system's tolerance to noise. Note that all slope reversal near input receiver thresholds is eliminated and the edge is monotonic, even when Vil 435 is lowered to 350 or 300 mV and Vih 440 is raised to 650 or 700 mV, extending the receiver thresholds to 350-650 mV and 300-700 mV. An improved motherboard interconnect has been described. That interconnect reduces impedance mismatch by adding a second trace between a memory controller and a DRAM far end cluster, and eliminates slope reversal in the signal waveform by making one trace longer than the other. Features shown in the above referenced drawings are not intended to be drawn to scale, nor are they intended to be shown in precise positional relationship. Additional features that may be integrated into the motherboard interconnect of the present invention have been omitted as they are not useful to describe aspects of the present invention. Although the foregoing description has specified a motherboard interconnect that includes certain features, those skilled in the art will appreciate that many modifications and substitutions may be made. 
For example, the layout for traces 304 and 316 may differ from the one shown here. In addition, a motherboard that includes the described interconnect falls within the spirit and scope of the present invention, even if its memory section (i.e., the section of the motherboard that will receive memory devices) is not yet populated with memory devices. Accordingly, it is intended that all such modifications, alterations, substitutions and additions be considered to fall within the spirit and scope of the invention as defined by the appended claims.
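The delay-matching rule of the preferred embodiment invites a quick worked example: if both traces have the same per-inch propagation delay, matching the added delay of the second trace to the junction-to-last-DRAM flight time reduces to matching lengths. The Python sketch below runs the numbers for one configuration; the 170 ps/inch propagation delay, the placement of the junction at the first DRAM of each set, and the neglect of DRAM input loading (which in practice slows the loaded signal-line segment) are assumptions of this sketch rather than values from the disclosure.

```python
# Hedged sketch of the trace-length matching rule: the second trace is
# made longer so that its extra delay equals the flight time from the
# "T" junction to the last DRAM in each set. The propagation delay and
# spacing values are illustrative assumptions for an FR4 microstrip.

PROP_DELAY_PS_PER_INCH = 170.0   # assumed board propagation delay


def extra_trace_length(num_drams_per_set: int, dram_spacing_in: float) -> float:
    """Length by which the second trace should exceed the first.

    Assumes the junction sits at the first DRAM of each set, so the
    junction-to-last-DRAM run spans (n - 1) spacings; with equal
    per-inch delay on both paths, matching delay reduces to matching
    length. Loading by the DRAM inputs is ignored here.
    """
    return (num_drams_per_set - 1) * dram_spacing_in


first_trace_in = 6.0   # within the preferred 4-8 inch range
delta_in = extra_trace_length(num_drams_per_set=4, dram_spacing_in=0.75)
second_trace_in = first_trace_in + delta_in
delay_delta_ps = delta_in * PROP_DELAY_PS_PER_INCH
print(f"second trace: {second_trace_in:.2f} in "
      f"({delta_in:.2f} in longer, ~{delay_delta_ps:.0f} ps added delay)")
```
|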
A method of operating a programmable logic device, including the steps of enabling resources of the programmable logic device being used in a circuit design implemented by the programmable logic device, and disabling unused or inactive resources of the programmable logic device that are not being used in the circuit design. The step of disabling can include de-coupling the unused or inactive resources from one or more power supply terminals. Alternatively, the step of disabling can include regulating a supply voltage applied to the unused or inactive resources. The step of disabling can be performed in response to configuration data bits stored by the programmable logic device and/or in response to user controlled signals. The step of disabling can be initiated during design time and/or run time of the programmable logic device. |
We claim:1. A programmable logic device comprising:a plurality of resources logically subdivided into a plurality of programmable logic blocks;a first voltage supply terminal configured to receive a first supply voltage;a plurality of first switch elements, wherein each first switch element is coupled between one of the programmable logic blocks and the first voltage supply terminal; anda control circuit coupled to the plurality of first switch elements,wherein the control circuit is configured to provide a plurality of control signals for controlling the plurality of first switch elements,wherein the control circuit comprises a plurality of configuration memory cells configured to store a corresponding plurality of configuration data values, wherein the control circuit provides the plurality of control signals in response to the plurality of configuration data values, andwherein the plurality of configuration data values identify unused programmable logic blocks determined at design time.2. The programmable logic device of claim 1, further comprising:a second voltage supply terminal configured to receive a second supply voltage; anda plurality of second switch elements, wherein each second switch element is coupled between one of the programmable logic blocks and the second voltage supply terminal.3. The programmable logic device of claim 1, wherein the control circuit further comprises a plurality of user control terminals configured to receive a corresponding plurality of user control signals, wherein the control circuit further provides the plurality of control signals in response to the plurality of user control signals.4. The programmable logic device of claim 3, wherein the plurality of user control signals identify inactive programmable logic blocks.5. The programmable logic device of claim 4, wherein the inactive programmable logic blocks are determined at run time.6. The programmable logic device of claim 1, wherein each first switch element comprises a transistor.7. The programmable logic device of claim 1, wherein the plurality of configuration data values stored in the plurality of configuration memory cells is part of a configuration bit stream provided for configuring the programmable logic device.8. A programmable logic device comprising:a first voltage supply terminal configured to receive a first supply voltage;a plurality of programmable logic blocks, each programmable logic block comprising one or more resources of the programmable logic device; anda plurality of voltage regulators, wherein each voltage regulator is coupled between one of the programmable logic blocks and the first voltage supply terminal; anda control circuit coupled to each of the voltage regulators, wherein the control circuit is configured to provide a plurality of control signals for controlling the plurality of voltage regulators,wherein the control circuit comprises a plurality of configuration memory cells configured to store a corresponding plurality of configuration data values, wherein the control circuit provides the plurality of control signals in response to the plurality of configuration data values, andwherein the plurality of configuration data values identify unused programmable logic blocks determined at design time.9. 
The programmable logic device of claim 8, wherein the control circuit further comprises a plurality of user control terminals configured to receive a corresponding plurality of user control signals, wherein the control circuit further provides the plurality of control signals in response to the plurality of user control signals. |
FIELD OF THE INVENTION The present invention relates to the disabling of unused and/or inactive blocks in a programmable logic device to achieve lower static power consumption. RELATED ART Technology scaling of transistor geometry has resulted in a rapid increase of static power consumption in semiconductor devices. At the current rate of increase, static power consumption will become the dominant source of power consumption in the near future. In many applications, such as those powered by batteries, low static power consumption is a property of great importance, for example, due to the desirability of a long battery life. Programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), have a significantly higher static power consumption than dedicated logic devices, such as standard-cell application specific integrated circuits (ASICs). A reason for this high static power consumption is that for any given design, a PLD only uses a subset of the available resources. The unused resources are necessary for providing greater mapping flexibility to the PLD. However, these unused resources still consume static power in the form of leakage current. Consequently, PLDs are generally not used in applications where low static power is required. It would therefore be desirable to have a PLD having reduced static power consumption. SUMMARY In accordance with one embodiment of the present invention, unused and/or inactive resources in a PLD are disabled to achieve lower static power consumption. One embodiment of the present invention provides a method of operating a PLD, which includes the steps of enabling the resources of the PLD that are used in a particular circuit design, and disabling the resources of the PLD that are unused or inactive. The step of disabling can include de-coupling the unused or inactive resources from one or more power supply terminals. Alternatively, the step of disabling can include regulating (e.g., reducing) a supply voltage applied to the unused or inactive resources. In accordance with one embodiment, the step of disabling can be performed in response to configuration data bits stored by the PLD. These configuration data bits can be determined during the design of the circuit to be implemented by the PLD. That is, during the design, the design software is able to identify unused resources of the PLD, and select the configuration data bits to disable these unused resources. The step of disabling can also be performed in response to user-controlled signals. These user-controlled signals can be generated in response to observable operating conditions of the PLD. For example, if certain resources of the operating PLD are inactive for a predetermined time period, then the user-controlled signals may be activated, thereby causing the inactive resources to be disabled. In accordance with another embodiment, a PLD includes a first voltage supply terminal that receives a first supply voltage, a plurality of programmable logic blocks, and a plurality of switch elements, wherein each switch element is coupled between one of the programmable logic blocks and the first voltage supply terminal. A control circuit coupled to the switch elements provides a plurality of control signals that selectively enable or disable the switch elements. The control circuit can be controlled by a plurality of configuration data values stored by the PLD and/or a plurality of user-controlled signals. 
In an alternate embodiment, each of the switch elements can be replaced by a switching regulator. The present invention will be more fully understood in view of the following description and drawings. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a flow diagram illustrating a conventional design flow used for PLDs. FIG. 2 is a flow diagram illustrating a design flow for a PLD in accordance with one embodiment of the present invention. FIG. 3 is a block diagram of a conventional PLD having four blocks, which are all powered by the same off-chip VDD voltage supply. FIG. 4 is a block diagram of a PLD that implements power-gating switch elements in accordance with one embodiment of the present invention. FIG. 5 is a block diagram of a PLD that implements switching regulators in accordance with one embodiment of the present invention. DETAILED DESCRIPTION In accordance with one embodiment of the present invention, unused and inactive resources in a programmable logic device (PLD), such as a field programmable gate array (FPGA), are disabled to achieve lower static power consumption. The present invention includes both an enabling software flow and an enabling hardware architecture, which are described in more detail below. Unused resources of the PLD can be disabled when designing a particular circuit to be implemented by the PLD (hereinafter referred to as "design time"). In addition, resources of the PLD that are temporarily inactive can be disabled during operation of the PLD (hereinafter referred to as "run time"). FIG. 1 is a flow diagram 100 illustrating a conventional design flow used for PLDs. Initially, a user designs a circuit to be implemented by the PLD (Step 101). This user design is described in a high-level specification, such as Verilog or VHDL. The high-level specification is first synthesized to basic logic cells available on the PLD (Step 102). A place and route process then assigns every logic cell and wire in the design to some physical resource in the PLD (Step 103). The design is then converted into a configuration bit stream, in a manner known to those of ordinary skill in the art (Step 104). The configuration bit stream is then used to configure the device by setting various on-chip configuration memory cells (Step 105). While modern design flows may be much more complex, they all involve the basic steps defined by flow diagram 100. In accordance with the present invention, unused resources of the PLD are identified during the design time, following the place and route process (Step 103). These unused resources are then selectively disabled during the design time. As described below, there are several ways to disable the unused resources. By selectively disabling the unused resources at design time, significant static power reduction may be achieved with no performance penalty. FIG. 2 is a flow diagram 200 illustrating a design flow in accordance with one embodiment of the present invention. Similar steps in flow diagrams 100 and 200 are labeled with similar reference numbers. Thus, flow diagram 200 includes Steps 101-105 of flow diagram 100, which are described above. In addition, flow diagram 200 includes the step of disabling unused resources in the PLD (Step 201). This step of disabling unused resources is performed after the place and route process has been completed in Step 103, and before the configuration bit stream is generated in Step 104. 
As described in more detail below, the unused resources are disabled by disabling predetermined programmable logic blocks of the PLD. In another embodiment, further power savings are obtained by disabling temporarily inactive resources of the configured PLD during run time. Often, the entire design or parts of the design are temporarily inactive for some period of time. If the inactive period is sufficiently long, it is worthwhile to disable the inactive resources to reduce static power consumption. In a preferred embodiment, the decision of when to disable a temporarily inactive resource is made by the designer. In this embodiment, the user logic is provided access to a disabling mechanism, which enables the inactive resources to be disabled dynamically. There are a number of techniques to disable resources in a PLD. In accordance with one embodiment, the PLD is logically subdivided into a plurality of separate programmable logic blocks. As described below, each programmable logic block may comprise one or more of the resources available on the programmable logic device. Switch elements are used to couple each of the programmable logic blocks to one or more associated voltage supply terminals (e.g., VDD or ground). The switch elements are controlled to perform a power-gating function, wherein unused and/or inactive programmable logic blocks are disabled (e.g., prevented from receiving power or receiving a reduced power). Preferably, only one of the voltage supply terminals (VDD or ground) is power-gated, thereby reducing the speed and area penalties associated with the switch elements. When the switch elements are controlled to de-couple the associated programmable logic blocks from the associated supply voltage, these programmable logic blocks are effectively disabled, thereby dramatically reducing the static power consumption of these blocks. FIG. 3 is a block diagram of a conventional PLD 300 having four programmable logic blocks 301-304, which are all powered by the same off-chip VDD voltage supply 305. Note that all four programmable logic blocks 301-304 are coupled to receive the VDD supply voltage during normal operating conditions, even if some of these blocks are not used in the circuit design. FIG. 4 is a block diagram of a PLD 400 in accordance with one embodiment of the present invention. Similar elements in FIGS. 3 and 4 are labeled with similar reference numbers. Thus, PLD 400 includes programmable logic blocks 301-304 and VDD voltage supply 305. In addition, PLD 400 includes switch elements 401-408, and control circuit 409. In the described embodiment, switch elements 401-404 are implemented by PMOS power-gating transistors 451-454, respectively, and switch elements 405-408 are implemented by NMOS power-gating transistors 455-458, respectively. In other embodiments, switch elements 401-408 may be any switch known to those ordinarily skilled in the art. 
Control circuit 409 is implemented by inverters 411-414, NOR gates 421-424, configuration memory cells 431-434, and user logic input terminals 441-444. NOR gates 421-424 and inverters 411-414 are configured to generate power-gating control signals SLEEP1-SLEEP4 and SLEEP#1-SLEEP#4 in response to the configuration data values CD1-CD4 stored in configuration memory cells 431-434, respectively, and the user control signals UC1-UC4 provided on user logic input terminals 441-444, respectively. For example, NOR gate 421 is coupled to receive configuration data value CD1 from configuration memory cell 431 and user control signal UC1 from user logic input terminal 441. If either the configuration data value CD1 or the user control signal UC1 is activated to a logic high state, then NOR gate 421 provides an output signal (SLEEP#1) having a logic "0" state. In response, inverter 411, which is coupled to the output terminal of NOR gate 421, provides an output signal (SLEEP1) having a logic "1" state. The SLEEP1 signal is applied to the gate of PMOS power-gating transistor 451, which is coupled between block 301 and the VDD voltage supply terminal. The SLEEP#1 signal is applied to the gate of NMOS power-gating transistor 455, which is coupled between block 301 and the ground voltage supply terminal. The logic "0" state of the SLEEP#1 signal causes NMOS power-gating transistor 455 to turn off, thereby de-coupling block 301 from the ground supply voltage terminal. Similarly, the logic "1" state of the SLEEP1 signal causes PMOS power-gating transistor 451 to turn off, thereby de-coupling block 301 from the VDD supply voltage terminal. De-coupling block 301 from the VDD and ground supply voltage terminals effectively disables block 301, thereby minimizing the static leakage current in this block. If both the configuration data value CD1 and the user control signal UC1 are de-activated to a logic low state, then NOR gate 421 provides a SLEEP#1 signal having a logic "1" state, and inverter 411 provides a SLEEP1 signal having a logic "0" state. The logic "1" state of the SLEEP#1 signal causes NMOS power-gating transistor 455 to turn on, thereby coupling block 301 to the ground supply voltage terminal. Similarly, the logic "0" state of the SLEEP1 signal causes PMOS power-gating transistor 451 to turn on, thereby coupling block 301 to the VDD supply voltage terminal. Coupling block 301 to the VDD and ground supply voltage terminals effectively enables block 301. Programmable logic block 302 may be enabled and disabled in response to configuration data value CD2 and user control signal UC2, in the same manner as block 301. Similarly, programmable logic block 303 may be enabled and disabled in response to configuration data value CD3 and user control signal UC3, in the same manner as block 301. Programmable logic block 304 may be enabled and disabled in response to configuration data value CD4 and user control signal UC4, in the same manner as block 301. As described above, when a programmable logic block is used and active, the associated power-gating transistors are turned on. Conversely, when a programmable logic block is unused or inactive, the associated power-gating transistors are turned off. The SLEEP1-SLEEP4 and SLEEP#1-SLEEP#4 signals can be controlled by the configuration data values CD1-CD4 stored by configuration memory cells 431-434, which are best suited for disabling the associated blocks at design time. 
If a block is not disabled at design time, this block can be disabled at run time by the user control signals UC1-UC4, which may be generated by the user logic, or by other means. In accordance with another embodiment of the present invention, some blocks have multiple supply voltages. In this case, all of the supply rails should be power-gated to achieve maximum power reduction. In accordance with another embodiment, only one switch element may be associated with each block. That is, the blocks are power-gated by de-coupling the block from only one power supply terminal, and not both the VDD and ground supply voltage terminals, thereby conserving layout area. The granularity of the power-gated programmable logic blocks can range from arbitrarily small circuits to significant portions of the PLD. The decision concerning the size of each programmable logic block is made by determining the desired trade-off between power savings, layout area overhead of the switch elements and the control circuit, and speed penalty. In an FPGA, each programmable logic block may be selected to include one or more configuration logic blocks (CLBs), input/output blocks (IOBs), and/or other resources of the FPGA (such as block RAM, processors, multipliers, adders, transceivers). Another way to disable a programmable logic block is by scaling down the local supply voltage to the block as low as possible, which dramatically reduces the static power consumption of the block. To scale down the local supply voltage in this manner, each independently controlled programmable logic block is powered by a separate switching regulator. FIG. 5 is a block diagram of a PLD 500 that implements switching regulators in accordance with one embodiment of the present invention. Similar elements in FIGS. 3 and 5 are labeled with similar reference numbers. Thus, PLD 500 includes programmable logic blocks 301-304 and VDD voltage supply 305. In addition, PLD 500 includes switching regulators 501-504, which are coupled between blocks 301-304, respectively, and VDD voltage supply 305. Switching regulators 501-504 are controlled by control circuits 511-514, respectively. In the described embodiment, switching regulators 501-504 reside on the same chip as blocks 301-304. However, in other embodiments, these switching regulators can be located external to the chip containing blocks 301-304. Switching regulators 501-504 can be programmably tuned to provide the desired supply voltages to the associated programmable logic blocks 301-304. For example, switching regulator 501 can provide a full VDD supply voltage to programmable logic block 301 when this block is used and active. However, switching regulator 501 can further be controlled to provide a reduced voltage (e.g., some percentage of the VDD supply voltage) to programmable logic block 301 when this block is unused or inactive. This reduced voltage may be predetermined (by design or via testing) depending on the desired circuit behavior. For example, this reduced voltage may be the minimum voltage required to maintain the state of the associated blocks. The static power consumption of block 301 is significantly reduced when the supplied voltage is reduced in this manner. Switching regulators 501-504 are controlled in response to the configuration data values C1-C4 stored in configuration memory cells 511-514, respectively, and the user control signals U1-U4 provided on user control terminals 521-524, respectively. 
A configuration data value (e.g., C1) having an activated state will cause the associated switching regulator (e.g., switching regulator 501) to provide a reduced voltage to the associated programmable logic block (e.g., block 301). Similarly, a user control signal (e.g., U2) having an activated state will cause the associated switching regulator (e.g., switching regulator 502) to provide a reduced voltage to the associated programmable logic block (e.g., block 302). A configuration data value (e.g., C3) and an associated user control signal (e.g., U3) both having deactivated states will cause the associated switching regulator (e.g., switching regulator 503) to provide the full VDD supply voltage to the associated programmable logic block (e.g., block 303). In accordance with one embodiment, configuration data values C1-C4 may be selected at design time, such that reduced voltages are subsequently applied to unused blocks during run time. User control signals U1-U4 may be selected during run time, such that reduced voltages are dynamically applied to inactive blocks at run time. Techniques for distributing multiple programmable down-converted voltages using on-chip switching voltage regulators are described in more detail in U.S. patent application Ser. No. 10/606,619, "Integrated Circuit with High-Voltage, Low-Current Power Supply Distribution and Methods of Using the Same" by Bernard J. New, et al., which is hereby incorporated by reference. In the embodiment of FIG. 5, the granularity of the voltage-scaled programmable logic blocks 301-304 should be fairly large because the overhead associated with switching regulators 501-504 is significant. In an FPGA, each programmable logic block 301-304 would most likely be divided into several clusters of configuration logic blocks (CLBs). The exact size of each programmable logic block may be determined by the desired trade-off among power savings, layout area overhead of the switching regulators, and the speed penalty. Although the invention has been described in connection with several embodiments, it is understood that this invention is not limited to the embodiments disclosed, but is capable of various modifications, which would be apparent to a person skilled in the art. For example, although the described embodiments included four programmable logic blocks, it is understood that other numbers of blocks can be used in other embodiments. Thus, the invention is limited only by the following claims.
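The control logic of FIG. 4 is simple enough to model directly: each block's SLEEP#/SLEEP pair is a NOR of its configuration bit and user control signal followed by an inversion, and a block is powered only when both of its gating transistors conduct. The Python sketch below is a behavioral model of that logic for illustration only; the disclosure describes hardware, and the class and attribute names here are assumptions of this sketch.

```python
# Behavioral sketch of the FIG. 4 power-gating control: NOR gate plus
# inverter per block, driven by a configuration bit (design time) and
# a user control signal (run time). Names are illustrative only.

from dataclasses import dataclass


@dataclass
class BlockPowerControl:
    config_disable: bool = False  # CDn from a configuration memory cell
    user_disable: bool = False    # UCn from a user logic input terminal

    @property
    def sleep_n(self) -> bool:
        """SLEEP# = NOR(CD, UC); drives the NMOS gating transistor."""
        return not (self.config_disable or self.user_disable)

    @property
    def sleep(self) -> bool:
        """SLEEP = NOT SLEEP#; drives the PMOS gating transistor."""
        return not self.sleep_n

    @property
    def powered(self) -> bool:
        """Block receives VDD and ground only when both switches conduct:
        PMOS on (SLEEP low) and NMOS on (SLEEP# high)."""
        return (not self.sleep) and self.sleep_n


blocks = [BlockPowerControl() for _ in range(4)]
blocks[1].config_disable = True   # unused block, disabled at design time
blocks[3].user_disable = True     # inactive block, disabled at run time
for i, b in enumerate(blocks, start=1):
    print(f"block 30{i}: SLEEP={int(b.sleep)} SLEEP#={int(b.sleep_n)} "
          f"powered={b.powered}")
```
|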
In described examples, an example apparatus (206) includes a data handler (302) having a first input to receive object data and a first output to output an object notation key-value pair for the object data; a string processor (304) having a second input coupled to the first output and a second output to convey the object notation key-value pair without string literals; and a hashing and encryption handler (306) having a third input coupled to the second output and a third output to convey the key-value pair signed with a private key, to convey the key-value pair encrypted with a public key, and to convey an indication that the encrypted key-value pair is encrypted in a key of the encrypted key-value pair. |
CLAIMS What is claimed is: 1. Apparatus comprising: a data handler having a first input to receive object data and a first output to output an object notation key-value pair for the object data; a string processor having a second input coupled to the first output and a second output to convey the object notation key-value pair without string literals; and a hashing and encryption handler having a third input coupled to the second output and a third output to convey the key-value pair signed with a private key, to convey the key-value pair encrypted with a public key, and to convey an indication that the encrypted key-value pair is encrypted in a key of the encrypted key-value pair. 2. The apparatus of claim 1, wherein the third output is to convey the encrypted key-value pair with a hash value of the encrypted key-value pair inserted in the key of the encrypted key-value pair. 3. The apparatus of claim 2, wherein the third output is to convey an indication that the encrypted key-value pair is hashed in the key of the key-value pair. 4. The apparatus of claim 1, wherein the third output is to convey an index value identifying an encryption cipher for encrypting the key-value pair in the key of the encrypted key-value pair. 5. The apparatus of claim 1, wherein the third output is to convey the encrypted key-value pair as a string value. 6. The apparatus of claim 1, wherein the third output is to convey an identification of a cipher for encrypting the key-value pair. 7. The apparatus of claim 1, wherein the third output is to convey an identification of the public key. 8. The apparatus of claim 1, further comprising a compression handler having a fourth input coupled to the third output and a fourth output to convey the encrypted key-value pair as compressed. 9. The apparatus of claim 8, wherein the fourth output is to convey an indication that the compressed key-value pair is compressed in a key of the compressed key-value pair. 10. The apparatus of claim 8, wherein the fourth output is to convey an identification of a compression algorithm for compressing the encrypted key-value pair in a value of the compressed key-value pair. 11. The apparatus of claim 1, further comprising a serialization processor having a fifth input coupled to the fourth output and a fifth output to convey the compressed key-value pair as serialized. 12. The apparatus of claim 11, wherein the fifth output is to convey an indication that the serialized key-value is serialized in a key of the serialized key-value pair. 13. The apparatus of claim 1, wherein the data handler includes a sixth output to convey an object notation file including the serialized key-value pair. 14. Apparatus comprising: a data handler to generate an object notation key-value pair for a data object; and a hashing and encryption handler to sign the key-value pair with a private key, encrypt the signed key-value pair with a public key, and insert in an object notation file an indication that the key-value pair is encrypted in a key of the key-value pair. 15. The apparatus of claim 14, wherein the hashing and encryption handler is to determine a hash value of the encrypted key-value pair. 16. The apparatus of claim 15, wherein the hashing and encryption handler is to insert the hash value in the key. 17. The apparatus of claim 16, wherein the hashing and encryption handler is to insert an indication that the key-value pair is hashed in the key. 18.
The apparatus of claim 14, wherein the hashing and encryption handler is to: determine that the object notation file includes multiple encryption ciphers; and insert an index value identifying an encryption cipher for encrypting the key-value pair in the key. 19. A method of generating an object notation file, the method comprising: generating an object notation key-value pair for a data object; signing the key-value pair with a private key; encrypting the signed key-value pair with a public key; and inserting in an object notation file an indication that the key-value pair is encrypted in a key of the key-value pair. 20. The method of claim 19, further comprising determining a hash value of the encrypted key-value pair. 21. The method of claim 20, further comprising inserting the hash value in the key. 22. The method of claim 21, further comprising inserting an indication that the key-value pair is hashed in the key. 23. The method of claim 19, further comprising: determining that the object notation file includes multiple encryption ciphers; and inserting an index value identifying an encryption cipher for encrypting the key-value pair in the key. |
METHODS AND APPARATUS TO PROVIDE EXTENDED OBJECT NOTATION DATA [0001] This disclosure relates generally to object notation data, and more particularly to methods and apparatus to provide extended object notation data. BACKGROUND [0002] In network communications (e.g., Internet communications), it is often beneficial to use a communication standard that uses human-readable text. Such communication standards are often easier for programmers to understand and may be more flexible than application-specific binary formats. One example communication standard that uses human-readable text is JavaScript Object Notation (JSON). JSON is well-suited for Internet communications because of its close ties to JavaScript, which is supported out-of-the-box by many Internet browsers and other applications. SUMMARY [0003] In described examples, a data handler has a first input to receive object data and a first output to output an object notation key-value pair for the object data. A string processor has a second input coupled to the first output and a second output to convey the object notation key-value pair without string literals. A hashing and encryption handler has a third input coupled to the second output and a third output to convey the key-value pair signed with a private key, to convey the key-value pair encrypted with a public key, and to convey an indication that the encrypted key-value pair is encrypted in a key of the encrypted key-value pair. BRIEF DESCRIPTION OF THE DRAWINGS [0004] FIG. 1 is a block diagram of an example environment in which example methods and apparatus disclosed herein may be implemented to generate and/or parse xJSON and/or any other human-readable object notation data files. [0005] FIG. 2 is a block diagram of an example implementation of the example xJSON handler of FIG. 1. [0006] FIG. 3 is a block diagram of an example implementation of the example generator of FIG. 2. [0007] FIG. 4 is a block diagram of an example implementation of the example parser of FIG. 2. [0008] FIGS. 5-8 are flowcharts representative of example computer readable instructions that may be performed to generate extended JSON data. [0009] FIGS. 9-11 are flowcharts representative of example computer readable instructions that may be performed to parse extended JSON data. [0010] FIG. 12 is a block diagram of an example processor platform structured to execute the instructions of FIGS. 5-11 to implement the example generator and/or the example parser of FIGS. 2-4. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS [0011] The Internet of Things (IoT) refers to the concept of joining a wide range of devices to the Internet. The "things" may be any type of device or system, which often includes many devices that have previously not included circuitry capable of communicating on a network such as the Internet (e.g., consumer appliances, automobiles, biomedical devices, power outlets, thermostats and/or other environmental sensors). For example, a coffee maker may include an embedded computing device that allows the coffee maker to be uniquely identified on the Internet and allows remote control and monitoring of the example coffee maker via other Internet connected devices.
Many IoT devices include low-cost and/or low-power computing devices to reduce the cost and physical space needed to add IoT functionality. [0012] While JSON and other standards using human-readable text for storing and transmitting data objects (e.g., Extensible Markup Language (XML), Yet Another Markup Language (YAML)) (collectively, object notation data) are well-suited for use with devices communicating on the Internet, example methods and apparatus disclosed in this application provide extensions to such human-readable formats to facilitate the use of the human-readable protocols with limited-resource devices such as IoT devices. This is advantageous because the disclosed methods and apparatus facilitate use of the desirable object notation data formats with IoT devices and/or any other device that has limited computing resources and/or communicates with many diverse devices. [0013] While the extensions disclosed herein are well-suited for use with IoT devices, the extensions are not limited to use with and/or by IoT devices. Examples disclosed herein are described with reference to an extended JSON, which is referred to herein as xJSON for consistency. Alternatively, the extended JSON may be used with any other content type name and/or the extensions may be used with an extended version of any other protocol or standard. The methods and apparatus disclosed herein are not limited to extending JSON. Rather, the extensions may be used with any type of human-readable based protocol for storing and transmitting objects. [0014] In JSON, objects are denoted by an array of key-value pairs delimited with opening and closing curly brackets. A key denotes a property of the object and the value identifies the value for that property. Keys and values are separated by a colon. For example, a person object in JSON may be stored in a file as:
{
"firstName": "John",
"lastName": "Smith",
"email": "john.smith@exmaple.com",
"password": "secretPassword123)"
}
In the above example, firstName, lastName, email, and password are keys and John, Smith, john.smith@exmaple.com, and secretPassword123) are values. The keys in the object may be referred to as names and may correlate with variables that store the values when the object is stored in an application (e.g., a person object in a JavaScript application). Thus, the JSON object provides a way to represent an object stored in binary or any other format in a human-readable format. [0015] The example extensions described herein include data packing, object serialization, and hashing/security. [0016] As used herein, data packing refers to applying a compression algorithm to keys and/or values (e.g., compressing the keys and/or values using GNU Zip (gzip)). When an xJSON file is received, packed data in the xJSON file may be identified and decompressed. [0017] As used herein, object serialization refers to converting keys and/or values to a binary value(s). In some examples, the binary value(s) are then converted to a text format (e.g., using Base64 binary-to-text encoding). When an xJSON file is received, serialized data in the xJSON file may be detected and deserialized/unmarshalled. [0018] In some examples, hashing/security operations are performed by generating a hash for a value of a key-value pair and inserting the hash into the key. The hash can be used for validating the contents of the value by comparing the hash stored in the key with a hash generated for the value of the key-value pair. The hash may additionally or alternatively be used for searching for data contained in values.
For example, a search parameter can be hashed and the hash for the search parameter can be compared with the hashes stored in keys to identify a match and, therefore, a value of a key-value pair that matches the search parameter. Additionally or alternatively, hashing/security may also include encrypting keys and/or values and replacing the unencrypted keys and/or values with the encrypted data. When an xJSON file is received, encrypted data in the xJSON file may be detected and decrypted. [0019] In example methods and apparatus disclosed herein, in applying the disclosed extensions to JSON, xJSON capable devices insert a qualifier in a key and/or value when the key and/or the value have been generated and/or modified based on xJSON extensions. The qualifier indicates to other xJSON capable devices that the xJSON extension has been applied. For example, a key may be modified by adding brackets to the end of the key and inserting a qualifier (e.g., a qualifier indicating a particular extension that has been applied) in the brackets. For example, if an extension associated with the character "x" is applied to key-value pairs in the person object shown in the previous paragraph, the xJSON representation may be:
{
"firstName[x]": "John",
"lastName[x]": "Smith",
"email[x]": "john.smith@exmaple.com",
"password[x]": "secretPassword123)"
}
The qualifier may alternatively be inserted between brackets added to the value of the key-value pair (e.g., "firstName":"John[x]") and/or qualifier(s) may be added to both the key and the value (e.g., "firstName[x]":"John[y]", "firstName[x]": {x; "v": "John"}). Alternatively, delimiters other than brackets may be used for separating the identifier from the key and/or value (e.g., curly braces, single quotation marks, quotation marks, asterisks). [0020] For consistency, the examples disclosed herein are described with reference to xJSON identifiers inserted between brackets in the key of key-value pairs. However, this disclosure is not limited to a particular format for the insertion of the identifiers and any other format, including those described above, may be used. [0021] The insertion of the identifier in the key name and/or value ensures that the xJSON representation can still be processed by a device that supports JSON but does not support xJSON (e.g., in some examples, using the xJSON techniques will not prevent devices that use JSON but do not support xJSON from parsing the xJSON file because the xJSON identifiers are inserted in a manner that is consistent with the JSON grammar). Accordingly, the use of xJSON will not cause devices that support JSON, but not xJSON, to fail during parsing of an xJSON file. Rather, these non-xJSON devices will continue to operate, but without an understanding of the extensions. Such an approach enhances the ability for xJSON capable devices to operate in an environment in which some devices do not support xJSON. [0022] An additional advantage of inserting the identifier in the key as disclosed herein is that it allows xJSON extensions to be applied on a selective basis. For example, an xJSON extension may be applied to an entire file, may be selectively applied to one or more objects in a file, or may be selectively applied to one or more key-value pairs in a file. Thus, when the xJSON file is being processed, key-value pairs that include an xJSON identifier in the key can be processed according to the extension and key-value pairs that do not include an xJSON identifier in the key can be processed using standard JSON processing.
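For illustration only (this sketch is not from the disclosure; all function names are inventions of this note, and SHA-256 merely stands in for whatever short hash an implementation uses), the bracketed-qualifier convention and the hash-in-key search described above might look roughly like this in Python:

import hashlib
import json
import re

_QUALIFIER = re.compile(r"\[([^\]]+)\]$")  # trailing [..] qualifier in a key

def qualify(key, qualifier):
    """Append an xJSON qualifier to a key, e.g. 'firstName' -> 'firstName[x]'."""
    return f"{key}[{qualifier}]"

def split_qualifier(key):
    """Return (base_key, qualifier or None); plain JSON keys pass through."""
    m = _QUALIFIER.search(key)
    return (key[: m.start()], m.group(1)) if m else (key, None)

def hash_tag(value):
    """Short digest used inside a '[#HHHH]' qualifier (illustrative hash choice)."""
    return hashlib.sha256(value.encode()).hexdigest()[:4]

def search_by_value(obj, wanted):
    """Match a search parameter against hashes stored in keys,
    without reading (or decrypting) the values themselves."""
    tag = f"#{hash_tag(wanted)}"
    return [k for k in obj if tag in k]

person = {
    qualify("email", f"#{hash_tag('john.smith@exmaple.com')}"): "john.smith@exmaple.com",
    "lastName": "Smith",  # untouched pair: processed as ordinary JSON
}
print(json.dumps(person))                                 # still valid JSON
print(search_by_value(person, "john.smith@exmaple.com"))  # -> ['email[#....]']

Because the qualifier lives inside the key string, json.dumps/json.loads handle the object unchanged, which is the compatibility property paragraph [0021] relies on.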
Furthermore, different ones of the extensions can be applied to subsets of the key-value pairs in a file. For example, in the foregoing example "person" object: (a) the firstName, lastName, and email key-value pairs may be processed to insert a hash value and an xJSON identifier for hashing in the corresponding keys; (b) the password key-value pair may be encrypted and hashed; and (c) an encryption identifier, a hash identifier, and a hash value may be inserted in the corresponding key. [0023] FIG. 1 is a block diagram of an example environment 100 in which example methods and apparatus disclosed herein may be implemented to generate and/or parse xJSON and/or any other human-readable object notation data files. The example environment includes an example web service 102 to convey example xJSON data 103 via an example network 104 to an example first device 106 and an example second device 108. As used herein, the phrase "in communication," including variances thereof, encompasses direct communication and/or indirect communication through one or more intermediary components and does not require direct physical (e.g., wired) communication and/or constant communication, but rather additionally includes selective communication at periodic or aperiodic intervals, as well as one-time events. [0024] In the illustrated example, the example web service 102, the example first device 106, and the example second device 108 exchange data using JSON data files. According to the illustrated example, the example web service 102 and the example first device 106 are xJSON capable in that they include an xJSON handler 110 to parse and/or generate files that are based on at least one of the extensions associated with xJSON disclosed herein. According to the illustrated example, the example second device 108 is not xJSON capable in that the example second device 108 may parse and/or generate JSON files but does not include the xJSON handler 110 for parsing and/or generating xJSON files with the extensions associated with xJSON disclosed herein. As disclosed herein, while the example second device 108 is not capable of using the extensions related to xJSON, the example xJSON data 103 of the illustrated example that is output by the example web service 102 and/or the example first device 106 may be successfully parsed (e.g., parsed without causing a parsing error). [0025] In the illustrated example, the web service 102 is a server computer for serving information on the Internet. Alternatively, the web service 102 may be any type of device with which a device connected to the example network 104 may communicate. The example web service 102 sends the example xJSON data 103 to the example first device 106 and/or the example second device 108. The example web service 102 may also receive xJSON data from the example first device 106 and/or JSON data from the example second device 108. Alternatively, the example web service 102 may only be capable of receiving data (e.g., xJSON data and/or JSON data) or may only be capable of sending data (e.g., the xJSON data 103 and/or JSON data). [0026] The network 104 of the illustrated example of FIG. 1 is the Internet. Alternatively, the network 104 may be any type of and/or combination of a local area network, a wide area network, a wired network, a wireless network, a private network, and/or a public network.
The example network 104 communicatively couples the example web service 102 with the example first device 106 and the example second device 108. [0027] The first device 106 of the illustrated example of FIG. 1 is an IoT device that includes the example xJSON handler 110 to parse and/or generate the example xJSON data 103. For example, the first device 106 may be a network-enabled microprocessor controller. For example, the first device 106 may be the CC3100 SimpleLink™ Wi-Fi® and Internet-of-Things Solution for MCU Applications or the CC3200 SimpleLink™ Wi-Fi® and Internet-of-Things Solution, a Single-Chip Wireless MCU from Texas Instruments®, and/or a device that includes the CC3100 or the CC3200. Alternatively, the first device 106 may be any other device in which it is desirable to use JSON data. [0028] The second device 108 of the illustrated example of FIG. 1 may be any device in which it is desirable to use JSON data. The second device 108 is included in the example of FIG. 1 to illustrate that devices that support xJSON extensions (e.g., the example web service 102 and the example first device 106) and devices that do not support xJSON extensions may be connected to the same network and may communicate with each other. For example, when xJSON extensions are implemented in a manner that does not run afoul of the JSON grammar, JSON files (e.g., the example xJSON data 103) that include at least some key-value pairs that include xJSON extensions (e.g., xJSON files) can be parsed by devices that do not support xJSON without causing parsing errors. Likewise, devices that support xJSON extensions are able to process JSON files. [0029] The example xJSON handler 110 parses and/or generates xJSON files (e.g., files that are generated according to the JSON protocol and include at least one key-value pair that includes one of the xJSON extensions disclosed herein, such as the example xJSON data 103). An example implementation of the xJSON handler 110 is described in further detail in conjunction with FIG. 2. While FIG. 1 illustrates that the example web service 102 and the example first device 106 include the same xJSON handler 110, in other examples, devices may include different xJSON handlers (e.g., the xJSON handler 110 of the example web service 102 may be implemented differently than the xJSON handler 110 of the example first device 106). [0030] Using the xJSON handler 110 enables a device to generate xJSON data and parse xJSON data (e.g., object notation data that includes the extensions disclosed herein). The xJSON handler 110 of the illustrated example facilitates the use of data representations that are not supported by existing object notation protocols. For example, the xJSON handler 110 may support the use of customized primitives (e.g., primitives that are well-suited for use with embedded devices such as IoT devices). For example, a binary typed literal may be input as "0bAAAA" or "0BAAAA", where "b" and "B" indicate that the value is a binary literal. In another example, a hexadecimal typed literal may be input as "0xAAAA" or "0XAAAA", where "x" and "X" indicate that the value is a hexadecimal literal. Hardware-based literals may also be supported by the xJSON handler 110. For example, an identifier may be added to a key and/or a value to indicate a literal of a volatile type, a literal for a hardware signal type (input, output, both), and a tri-state value for a signal object.
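As a rough sketch of recognizing such typed literals (an assumption-laden illustration: the "AAAA" digits above are placeholders, the digits after the prefix are assumed interpretable in the indicated base, and hardware-signal literals are omitted):

def parse_typed_literal(text):
    """Interpret the '0b'/'0B' (binary) and '0x'/'0X' (hexadecimal) typed
    literals described above; anything else is returned unchanged."""
    if text[:2] in ("0b", "0B"):
        return int(text[2:], 2)
    if text[:2] in ("0x", "0X"):
        return int(text[2:], 16)
    return text  # not a typed literal

print(parse_typed_literal("0b1010"))  # 10
print(parse_typed_literal("0xAAAA"))  # 43690
print(parse_typed_literal("John"))    # 'John'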
In other words, the flexibility of using identifiers appended to, inserted in, or replacing portions of keys and/or values allows the xJSON handler 110 to indicate information about keys and/or values, including indicating the state (e.g., encrypted, compressed, serialized) and/or the purpose of the value (e.g., a literal for a hardware signal type). [0031] While the example environment 100 of FIG. 1 includes the example web service 102, the example first device 106, and the example second device 108, any number and/or types of devices may be used. For example, an environment might include any combination of two devices and/or web services, three devices and/or web services, four devices and/or web services, or hundreds of devices and/or web services. [0032] FIG. 2 is a block diagram of an example implementation of the example xJSON handler 110 of FIG. 1. The example xJSON handler 110 of FIG. 2 includes an example interface 202 to send and/or receive example object notation data 200 (e.g., the example xJSON data 103), an example parser 204 to parse the example object notation data 200 to output example object data 210, an example generator 206 to generate example object notation data 200 from example object data 212, and an example JavaScript interpreter 208. While the xJSON handler 110 of FIG. 2 is described with reference to the example first device 106, the xJSON handler 110 of FIG. 2 may be implemented in another device (e.g., the example web service 102). [0033] The example interface 202 of the illustrated example is a network interface 202 that sends and/or receives the object notation data 200 (e.g., the example xJSON data 103) to and/or from the network 104 and/or from one or more other components of the device that includes an xJSON handler 110 (e.g., components of the example web service 102 and/or components of the example first device 106). For example, the xJSON handler 110 of the example first device 106 may receive the example xJSON data 103 retrieved from the example web service 102. The example interface 202 transmits the object notation data 200 received from the network 104 to the example parser 204. The example interface 202 transmits the object notation data 200 received from the example generator 206 to a desired destination for the object notation data 200. For example, the interface 202 for the xJSON handler 110 of the example first device 106 may transmit the example object notation data 200 generated by the example generator 206 to the example web service 102. [0034] The parser 204 of the illustrated example receives the object notation data 200 and parses the data to extract the objects represented by the object notation data 200. The parser 204 transmits extracted object data 210 to the example JavaScript interpreter 208. For example, returning to the example person object discussed above, the parser 204 retrieves the elements of the person object from the key-value pairs included in the xJSON data (e.g., the firstName, the lastName, the email, and the password) and builds a JavaScript person object that is transmitted to the JavaScript interpreter 208. The example parser 204 includes functionality for parsing xJSON data that includes one or more of the data packing, object serialization, and/or hashing/security extensions. An example implementation of the parser 204 is described in further detail in conjunction with the block diagram of FIG. 4.
[0035] The example generator 206 of the illustrated example builds object notation data 200 (e.g., an xJSON file) to represent object data 212 received from the JavaScript interpreter 208. For example, the example generator 206 may build the example person object in xJSON based on a person object stored in the JavaScript interpreter. The object notation data 200 generated by the generator 206 is transmitted to the interface 202 for transmission to another device (e.g., the web service 102). Alternatively, the xJSON handler 110 and/or a device that includes the xJSON handler 110 may store the object notation data 200 (e.g., for later transmission and/or processing). The example generator 206 is described in further detail in conjunction with the block diagram of FIG. 3. [0036] The JavaScript interpreter 208 of the illustrated example is a software run-time environment that operates according to the JavaScript programming language to execute JavaScript applications or any other JavaScript instructions. The example JavaScript interpreter 208 of the illustrated example stores object data 210 and/or 212 (e.g., the above-described person object). While the JavaScript interpreter 208 of the illustrated example uses JavaScript, the JavaScript interpreter 208 may alternatively be any other run-time environment that can receive objects output by the example parser 204 and/or transmit objects to the example generator 206. [0037] FIG. 3 is a block diagram of an example implementation of the example generator 206 of FIG. 2. The example generator 206 of FIG. 3 includes an example data handler 302, an example string processor 304, an example hashing and encryption handler 306, an example compression handler 308, and an example serialization processor 310. [0038] The example data handler 302 receives object data 312 (e.g., the example object data 212 received from the example JavaScript interpreter 208) and generates object notation data 314 populated with xJSON key-value pairs representative of the object data 312. For example, the example data handler 302 may provide an interface (e.g., an Application Programming Interface (API)) through which a request for generation of an xJSON file may be received. The example data handler 302 determines if the request indicates that an xJSON file is to be generated or if a JSON file that includes xJSON extensions is to be generated. For example, as described in detail herein, if the object notation data 314 does not need to be compatible with devices that do not support xJSON, the object notation data 314 output by the generator 206 may be formatted to be processed by an xJSON capable device (e.g., the quotation marks that surround keys and values according to standard JSON grammar can be excluded when the example parser 204 will parse the file and implicitly evaluate the data without the presence of the quotation marks). The example data handler 302 of the illustrated example records the content type for the xJSON file, creates a key-value pair 316 (a single key-value pair is discussed, but multiple key-value pairs may be used) for the object data 312 (e.g., by creating a key named for the variable of the object data 312 and creating a corresponding value for the value of the variable), and sends the key-value pair 316 to the example string processor 304. [0039] The example string processor 304 of FIG. 3 determines if the content type for the object notation data 314 is to be xJSON or standard JSON.
If the object notation data 314 is intended to be parsed by xJSON capable devices and non-xJSON capable devices, the string processor 304 inserts quotation marks around the keys and the values in the key-value pair 316. If compatibility with non-xJSON capable devices is not desired, the string processor 304 does not insert the quotation marks, as the strings of the example key-value pair 316 will be implicitly recognized by xJSON capable parsers (e.g., the example parser 204 of FIG. 2). For example, the following example person object may be generated when non-xJSON compatibility is desired:
{
"firstName": "John",
"lastName": "Smith",
"email": "john.smith@exmaple.com",
"password": "secretPassword123)"
}
Alternatively, the following example person object may be generated when non-xJSON compatibility is not desired and/or needed:
{
firstName: John,
lastName: Smith,
email: john.smith@exmaple.com,
password: secretPassword123)
}
[0040] The example string processor 304 outputs a processed key-value pair 318 to the example hashing and encryption handler 306. [0041] The example hashing and encryption handler 306 of FIG. 3 receives the processed key-value pair 318 and determines if hashing and/or encryption of the processed key-value pair 318 is requested. For example, the request to generate the object notation data 314 may identify one or more key-value pairs and/or objects for which hashing and/or encryption is requested. For example, if an object includes a username field and a password field, hashing may be requested for all fields but encryption may be requested for only the password field. Alternatively, the hashing and encryption handler 306 may automatically determine that hashing and/or encryption is desired (e.g., when a key-value pair is identified as sensitive data such as a password field). [0042] When hashing and/or encryption are requested, the hashing and encryption handler 306 determines a desired cipher (e.g., an encryption cipher, a hashing cipher, a combination of an encryption cipher and a hashing cipher) to be used. For example, the request to perform hashing and/or encryption may identify a cipher and/or the example hashing and encryption handler 306 may use a default cipher. [0043] To certify the authenticity of the processed key-value pair 318 to other devices, the hashing and encryption handler 306 of this example signs the processed key-value pair 318 using a private key of the content owner (e.g., the owner of the data for which the xJSON file is being generated) to generate an encrypted and/or hashed key-value pair 320. In such examples, the encrypted and/or hashed key-value pair 320 can be verified by others with access to the public key corresponding to the private key. [0044] When the cipher includes encryption (e.g., as opposed to only including data signing), the example hashing and encryption handler 306 of FIG. 3 encrypts the encrypted and/or hashed key-value pair 320 for any key-value pairs for which encryption was requested. The example hashing and encryption handler 306 encrypts the encrypted and/or hashed key-value pair 320 using a public key corresponding to a private key that may be used for decrypting the encrypted and/or hashed key-value pair 320. For example, the parser 204 of FIGS. 2 and/or 4 may store a private key that may be used for decrypting data encrypted using a corresponding public key. [0045] The example hashing and encryption handler 306 then hashes the encrypted and/or hashed key-value pair 320 (e.g., key-value pairs are hashed when hashing was requested).
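The order of operations just described (sign with the content-owner's private key, encrypt with a public key, then hash the encrypted result) can be sketched as follows. This is not the disclosure's implementation: standard-library Python has no ECDSA or public-key cipher, so sign and encrypt are injected stand-ins, and SHA-256 stands in for the hash:

import base64
import hashlib

def sign_then_encrypt(plain, sign, encrypt):
    """Sign first, encrypt second, then Base64 the ciphertext so it can sit
    in a JSON string value. `sign`/`encrypt` are placeholders for a real
    scheme such as the ECDSA_SHA256 cipher named in the example below."""
    return base64.b64encode(encrypt(sign(plain.encode()))).decode()

def hash_tag(cipher_text):
    """Short digest of the *encrypted* value (the hashing happens after
    encryption per the text); 4 hex digits as in the '[#HHHH]' form."""
    return hashlib.sha256(cipher_text.encode()).hexdigest()[:4]

# Toy stand-ins so the sketch runs; a real implementation would use an
# asymmetric-crypto library.
toy_sign = lambda b: b + b"|signed"
toy_encrypt = lambda b: bytes(x ^ 0x5A for x in b)

value = sign_then_encrypt("secretPassword123)", toy_sign, toy_encrypt)
key = f"passwords[s1#{hash_tag(value)}]"  # 's'=encrypted, '1'=cipher index, '#'=hash
print({key: value})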
The example hashing and encryption handler 306 hashes the encrypted value for any encrypted data (as opposed to the original data prior to encryption). [0046] The hashing and encryption handler 306 of the illustrated example inserts cipher keys data into the encrypted and/or hashed key-value pair 320 (e.g., inserts the data in an xJSON file) that is being generated. The cipher keys data identifies one or more keys that were used in encrypting the encrypted and/or hashed key-value pair 320. For example, the cipher keys data may include an identifier for a certificate for which a public key was used for encrypting the encrypted and/or hashed key-value pair 320 to enable the example parser 204 to locate the corresponding private key for use in decrypting the encrypted and/or hashed key-value pair 320. The cipher keys may additionally include an identifier for a certificate for which a private key was used for signing the data. Where multiple keys are used in a single xJSON file, each key may be identified with a sequential number in the cipher keys data. In addition to the identifier for the encryption key, the cipher keys data of the illustrated example also identifies the particular cipher algorithm used for the hashing and/or encryption and any parameters corresponding to the cipher algorithm. [0047] In some examples, the hashing and encryption handler 306 also inserts the public key certificate(s) that were used for encrypting the encrypted and/or hashed key-value pair 320 and/or that may be used for validating the signing data in the encrypted and/or hashed key-value pair 320. [0048] The example hashing and encryption handler 306 of the example of FIG. 3 inserts in the corresponding keys of the encrypted and/or hashed key-value pair 320 an indication of the hashing and/or an indication of the encryption. For example, for a key that has its value hashed, the example hashing and encryption handler 306 inserts '[#HHHH]' in the key, where '#' is a hash identifier and 'HHHH' is the value resulting from the hashing. The example hashing and encryption handler 306 inserts '[sX#HHHH]' in the key for the encrypted and/or hashed key-value pair 320, where the 's' is an encryption identifier, the 'X' is an index value of the key used for the signing and/or encryption where there are multiple keys, the '#' is a hash identifier, and 'HHHH' is the value resulting from the hashing. [0049] For example, the following is a person object before hashing and encryption:
{
"name": "John Smith",
"email": "john.smith@exmaple.com",
"passwords": ["12345678", "my-most-secret-passphrase", "abcdefg"]
}
A hashed and encrypted person object that may be generated by the example hashing and encryption handler 306 of FIG. 3 for the person object above is:
{
"keys": [ { "id": "8707b206-669f-4eb8-b519-643c11c24e2a",
"cipher": "ECDSA_SHA256",
"param": { "type": "named-curve",
"curve": "prime256v1" }}],
"cert": [.. public key certificates if necessary ..
]"name[#b417]": "John Smith", "email[#7c3d]": "[email protected]","passwords [s#l 81 f] " : "MzAONT AyMj EwMGQzNmYzY2EyZj JmNj c4 YTRm MGEyNWVjODdhMDklMjA4YTJhMjViZWRlZTRiYzFhZDdkNzIwNzUl NzMzMjYyNGEwMjIwNGYwOWQxNzYlY2MyMjkxNTM0ZDE5YWQ2N DE3 Zj cxMmU 1 ZDZlMDliZD YOODcyNT A5 OD A5 MTBiZmU 1 NDc4MmZk NTZkYjcJM2UJMDYJMTgJYzYJMjYJMmEJ0WIJZjIJ mYJ TcJ TMJ zYJYTIJYmQNCmVlCTE0CTUyCWU4CWM3 CTE 1 C WU4CWQ3 CWQwC TQ0CTFhCTEzCWUwCWFlCTJlCTNhDQpjMQk3MQk5OQkxNwljNQk5O Ak 1 Ygk4YwkxNQk4NAk 1 MAk5NgkzNwkOYgkzYQk3 YQ=="}[0050] The example hashing and encryption handler 306 outputs the encrypted and/or hashed key-value pair 320 to the example compression handler 308.[0051] The compression handler 308 of the illustrated example determines if compression of the encrypted and/or hashed key -value pair 320 is requested. For example, the request to generate the xJSON file may identify one or more key -value pairs and/or objects for which compression is requested. Alternatively, the compression handler 308 may automatically determine that compression is desired when a key-value pair and/or an object exceeds a threshold size. When compression is requested, the compression handler 308 compresses (e.g., zips) the encrypted and/or hashed key-value pair 320 (e.g., the requested key-value pair and/or the requested object) to generate the example compressed key-value pair 322. The example compression handler 308 of the illustrated example inserts a key for the compressed data in the example compressed key-value pair 322. For example, the compression handler 308 may insert a generated key for the compressed data (e.g., a key such as _oX, where X is a number that increments for each set of generated data inserted in the generated xJSON file) to ensure that the key for each set of generated data is unique. The example compression handler 308 of FIG. 3 also inserts a compression identifier (e.g., [z]) in the example compressed key-value pair 308 to indicate to the example parser 204 that the compressed key-value pair 308 is compressed. In the illustrated example, the example compression handler 308 inserts a value for the key that identifies metadata for the compression and includes the compressed data. For example, the compression handler 308 may use gzip for compressing the encrypted and/or hashed key-value pair 320 and may insert metadata identifying the algorithm and the look-up table for the compression algorithm. For example, the result of compressing the person object may be:{ "_ol [z]": {'alg' : 'gzip','lut' : '+srRo'os',Ό': χοεΕΚΙΜμΕΕΪΕ8(ϊί,Ε°ΝίΜΠ}}[0052] The example compression handler 308 outputs the compressed key-value pair 322 to the example serialization processor 310.[0053] The example serialization processor 310 of the illustrated example determines if serialization of the example compressed key-value pair 322 is requested. For example, the request to generate an xJSON file may identify one or more key -value pairs and/or objects for which serialization is requested. When serialization is requested, the example serialization processor 310 serializes the requested compressed key-value pair (e.g., the requested key-value pair and/or the requested object) to generate an example serialized key-value pair 324. The serialization processor 310 of the illustrated example inserts a key for the serialized key -value pair 324. 
For example, the serialization processor 310 may insert a generated key for the serialized key-value pair 324 (e.g., _oX, where X is a number that increments for each set of generated data in the generated xJSON object notation data 314) to ensure that the key for each set of generated data is unique. The example serialization processor 310 also inserts a serialization identifier (e.g., [b]) in the key to indicate to the example parser 204 that the serialized key-value pair is serialized. The example serialization processor 310 of FIG. 3 inserts the serialized data as the value for the key of the serialized key-value pair 324. In the illustrated example, the example serialization processor 310 converts the binary data resulting from the serializing to ASCII text using Base64 conversion. For example, the result of serializing the person object may be:
{
"_o1[b]": "NjTigJxuYW114oCdOiDigJxKb2huIFNtaXRo4oCdLA0K4oCcZWlhaWzigJ06IOKAnGpvaG4uc21pdGhAZXhtYXBsZS5jb23igJ0NCg=="
}
[0054] The example serialization processor 310 of the illustrated example transmits the resulting serialized key-value pair 324 to the example data handler 302 for transmission to the destination for the object notation data 314 (e.g., to the example web service 102 via the example interface 202 and the example network 104). For example, the object notation data 314 may be transmitted via the example interface 202 to the example web service 102 for parsing by the example parser 204 in the xJSON handler 110 implemented in the example web service 102. The example web service 102 may then process the data objects in accordance with the operation of the example web service 102. Alternatively, the object notation data 314 may be transmitted to any other desired location. In some examples, the data handler 302 includes references to invoke functions in the object notation data 314. For example, a function referenced as "func1(arg1, ...argN)" will cause a parser (e.g., the example parser 204) to invoke the function identified as "func1" when parsing the object notation data. Alternatively, a function referenced as "@uri#func1(arg1,...argN)" will cause the parser (e.g., the example parser 204) to cause "func1" to be invoked by the server listening at the location "uri." [0055] FIG. 4 is a block diagram of an example implementation of the parser 204 of FIG. 2. The example parser 204 of FIG. 4 includes a data handler 402, a string processor 404, a decryption handler 410, a deserialization processor 406, and a decompression handler 408. [0056] The data handler 402 of the example of FIG. 4 receives example object notation data 412 (e.g., an xJSON file) to be processed. The example data handler 402 extracts the key-value pairs and/or objects from the object notation data 412 and transmits them to the example string processor 404. For example, the data handler 402 may extract one key-value pair at a time and transmit the key-value pair as the example key-value pair 414 for processing by the example string processor 404, the example deserialization processor 406, the example decompression handler 408, and/or the example decryption handler 410.
Alternatively, the data handler 402 may extract multiple key-value pairs for processing by the example string processor 404, the example deserialization processor 406, the example decompression handler 408, and the example decryption handler 410. [0057] Following processing by one or more of the example string processor 404, the example deserialization processor 406, the example decompression handler 408, and the example decryption handler 410, the example data handler 402 receives the data object(s) (e.g., the example decrypted pair 420) and transmits example object data 422 containing the objects extracted from the example object notation data 412 to the example JavaScript interpreter 208. [0058] The example string processor 404 of FIG. 4 receives the example key-value pair(s) 414 extracted from the object notation data 412 (e.g., an xJSON file) by the data handler 402 and automatically determines if the example key-value pair 414 is an xJSON file or a JSON file based on the presence or lack of string literals (e.g., quotation marks surrounding the keys and values). According to the illustrated example, the string processor 404 determines that the example key-value pair 414 is associated with a JSON file when the string literals are present and determines that the example key-value pair 414 is an xJSON file when the string literals are not present. The example string processor 404 additionally removes the string literals (e.g., the quotation marks) when they are present to reduce the data size. The example string processor 404 transmits processed key-value pair(s) 416 to the example deserialization processor 406. [0059] The example deserialization processor 406 of FIG. 4 determines if the processed key-value pair(s) 416 include a serialization identifier (e.g., a key that has been modified to include an indication that serialization has been performed, such as the letter "b" in brackets). When the example deserialization processor 406 determines that the serialization identifier is included in the example processed key-value pair 416, the example deserialization processor 406 deserializes the example processed key-value pair 416. For example, serialized data may be encoded in Base64 and the example deserialization processor 406 will decode the Base64 representation to retrieve the original key-value pair(s). After performing any needed deserialization, the example deserialization processor 406 of the illustrated example transmits deserialized key-value pair(s) 417 to the example decompression handler 408. [0060] The example decompression handler 408 determines if the example deserialized key-value pair(s) 417 include a compression identifier (e.g., a key that has been modified to include an indication that compression has been performed, such as the letter "z" in brackets). When the example decompression handler 408 of this example determines that the compression identifier is included in a key of the example deserialized key-value pair 417, the example decompression handler 408 decompresses the example deserialized key-value pair 417. The example decompression handler 408 of the illustrated example retrieves the identity of the compression algorithm from metadata inserted into the value(s) of the example deserialized key-value pair 417 during compression by the example compression handler 308. For example, the metadata may include an identity of the compression algorithm (e.g., gzip) and parameters for use during the compression and/or decompression (e.g., a lookup table).
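Before following the pair onward to decryption, the detect-and-undo logic of paragraphs [0058]-[0060] might look roughly like the sketch below (mirroring the generator sketch above, so the [z] payload is assumed Base64-wrapped; an '[s...]' key would hand off to the decryption step, which is omitted here):

import base64
import gzip
import json

def unwrap_member(key, value):
    """Reverse the transform advertised in the key: '[b]' -> Base64
    deserialization, '[z]' -> decompression driven by the 'alg' metadata."""
    if "[b]" in key:
        return json.loads(base64.b64decode(value))
    if "[z]" in key:
        if value.get("alg") != "gzip":
            raise ValueError(f"unsupported compression: {value.get('alg')}")
        return json.loads(gzip.decompress(base64.b64decode(value["o"])))
    return value  # no extension applied; ordinary JSON member

blob = base64.b64encode(json.dumps({"name": "John Smith"}).encode()).decode()
print(unwrap_member("_o1[b]", blob))  # -> {'name': 'John Smith'}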
After performing any needed decompression, the example decompression handler 408 transmits decompressed key-value pair(s) 418 to the example decryption handler 410. [0061] The decryption handler 410 of this example determines if the decompressed key-value pair(s) 418 include an encryption identifier (e.g., a key that has been modified to include an indication that encryption has been performed, such as the letter "s" in brackets). When the decryption handler 410 of the illustrated example determines that the encryption identifier is included in a key of the example decompressed key-value pair 418, the decryption handler 410 decrypts the key-value pair. For example, the decryption handler 410 may have access to private keys installed on the device on which the xJSON handler 110 is implemented (e.g., the example first device 106). The decryption handler 410 of the illustrated example may retrieve the private key corresponding to the decompressed key-value pair 418 and use the private key for decrypting the decompressed key-value pair 418. Alternatively, the decryption handler 410 may prompt a user to input a private key for performing the decryption. [0062] The example decryption handler 410 of FIG. 4 determines the appropriate private key for the decryption by analyzing the keys field inserted into the decompressed key-value pair 418 and/or the example object notation data 412. Alternatively, information identifying the keys used for encrypting and/or decrypting the decompressed key-value pair(s) 418 may be stored in any other location (e.g., information about keys used for encryption may be inserted in the key of an encrypted key-value pair). In some examples where multiple keys are used in the example object notation data 412, the encryption identifier may include an identifier for the particular one of the keys used for encrypting (and similarly for decrypting) the decompressed key-value pair 418. For example, as described above in conjunction with the hashing and encryption handler 306, the encryption identifier may be "[sX#HHHH]" where X is an index value identifying one of the keys in the keys field inserted in the object notation data 412. [0063] After performing any needed decryption, the example decryption handler 410 of the illustrated example transmits example decrypted key-value pair(s) 420 to the data handler 402 for transmission of the example object data 422 to the example JavaScript interpreter 208. [0064] While an example manner of implementing the generator 206 of FIG. 2 is illustrated in FIG. 3, one or more of the elements, processes and/or devices illustrated in FIG. 3 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example data handler 302, the example string processor 304, the example hashing and encryption handler 306, the example compression handler 308, the example serialization processor 310, and/or, more generally, the generator 206 of FIGS. 2 and 3 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware.
Thus, for example, any of the example data handler 302, the example string processor 304, the example hashing and encryption handler 306, the example compression handler 308, and/or the example serialization processor 310 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example data handler 302, the example string processor 304, the example hashing and encryption handler 306, the example compression handler 308, and/or the example serialization processor 310 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), or a Blu-ray disk storing the software and/or firmware. Further still, the xJSON handler 110 of FIG. 1 and/or the generator 206 of FIGS. 2 and/or 3 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 3, and/or may include more than one of any or all of the illustrated elements, processes and devices. [0065] Flowcharts representative of example machine readable instructions for implementing the example generator 206 are shown in FIGS. 5-8. In these examples, the machine readable instructions include program(s) for execution by a processor such as the processor 1212 shown in the example processor platform(s) 1200 discussed below in connection with FIG. 12. The program(s) may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor(s) 1212, but the entire program(s) and/or parts thereof could alternatively be executed by a device other than the processor(s) 1212 and/or embodied in firmware or dedicated hardware. Further, although the example program(s) are described with reference to the flowcharts illustrated in FIGS. 5-8, many other methods of implementing the example generator 206 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. [0066] As mentioned above, the example processes of FIGS. 5-8 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and transmission media. As used herein, "tangible computer readable storage medium" and "tangible machine readable storage medium" are used interchangeably. Additionally or alternatively, the example processes of FIGS.
5-8 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and transmission media. [0067] The example computer readable instructions of FIG. 5 begin when the example data handler 302 receives data and a request to generate object notation data (e.g., an xJSON file) (block 502). For example, the example data handler 302 may receive a JavaScript object from the example JavaScript interpreter 208. The example data handler 302 determines if xJSON output is requested (block 504). For example, the request to generate the xJSON file may include an indication that an xJSON specific file is requested. An xJSON specific file is a file that does not need to support parsing by devices that do not support xJSON. For example, for files that support parsing by JSON, the keys and values are surrounded by quotation marks, and for xJSON files that do not need to support JSON, the keys and values do not need to be surrounded by quotation marks, reducing the file size. When the request indicates that the output file is to be an xJSON file type, the example data handler 302 inserts a content type identifier indicating that the file is an xJSON file type (e.g., "Content-Type: application/xjson") (block 506). When the request indicates that the output file is to be a JSON file type, the example data handler 302 inserts a content type identifier indicating that the file is a JSON file type (e.g., "Content-Type: application/json") (block 508). [0068] After the content type is set in block 506 or block 508, the example data handler 302 selects a data object (e.g., selects a first data object, a first element of a data object, a next data object) (block 510). For example, the example data handler 302 may select the firstName element of the example person object described above. The example data handler 302 then generates a key-value pair for the selected element (block 512). For example, the example data handler 302 may create a key named "firstName" and a value containing the value for the firstName element to generate the JSON key-value pair: "firstName: John". [0069] The example string processor 304 then determines if the content type was set to xJSON for the file (block 514). If the content type was not set to xJSON (e.g., the output file is to support parsing by JSON parsers that do not support xJSON), the example string processor 304 inserts quotation marks around the key and the value in the generated key-value pair (block 516). [0070] After the string processor 304 inserts the quotation marks in block 516 or after the string processor 304 determines that the content type for the file is set to xJSON (block 514), the example hashing and encryption handler 306 determines if the key-value pair is to be hashed and/or encrypted (block 518).
The example hashing and encryption handler 306 may determine that the key-value pair is to be hashed and/or encrypted when the request to generate the xJSON file indicates that the key-value pair is to be hashed and/or encrypted. Alternatively, the hashing and encryption handler 306 may automatically determine that data is to be encrypted when detecting that the key-value pair contains sensitive data (e.g., when the key-value pair is a password field). When the hashing and encryption handler 306 determines that the key-value pair is to be hashed and/or encrypted, the example hashing and encryption handler 306 hashes and/or encrypts the key-value pair (block 520). Example computer readable instructions for hashing and/or encrypting the key-value pair are described in conjunction with FIG. 8. [0071] After the example hashing and encryption handler 306 determines that hashing and encryption are not requested (block 518) or the hashing and encryption handler hashes and/or encrypts the key-value pair (block 520), the example compression handler 308 determines if the key-value pair is to be compressed (block 522). The compression handler 308 may determine that the key-value pair is to be compressed when the request to generate the object notation data indicates that the key-value pair is to be compressed. Alternatively, the example compression handler 308 may determine that the key-value pair is to be compressed when the size of the value exceeds a threshold level. When the compression handler 308 determines that the key-value pair is to be compressed, the example compression handler 308 compresses the key-value pair (block 524). Example computer readable instructions for compressing the key-value pair are described in conjunction with FIG. 6. [0072] After the example compression handler 308 determines compression is not requested (block 522) or the compression handler 308 compresses the key-value pair (block 524), the example serialization processor 310 determines if the key-value pair is to be serialized (block 526). The example serialization processor 310 may determine that the key-value pair is to be serialized when the request to generate the object notation data indicates that the key-value pair is to be serialized. When the serialization processor 310 determines that the key-value pair is to be serialized, the example serialization processor 310 serializes the key-value pair (block 528). An example process for serializing the key-value pair is described in conjunction with FIG. 7. [0073] After performing any requested hashing and/or encrypting (block 520), compressing (block 522), and serializing (block 528), the example data handler 302 inserts the generated key-value pair in the object notation data (e.g., an xJSON file) (block 530). The example data handler 302 determines if there are additional data objects and/or elements for which key-value pairs are to be generated (block 532). When there are additional objects and/or elements for key-value pair generation, control returns to block 510 to process the next object and/or element. When there are no additional objects and/or elements for key-value pair generation, the example computer readable instructions of FIG. 5 end. [0074] FIG. 6 is a flowchart of example computer readable instructions to compress a key-value pair. The example computer readable instructions of FIG. 6 may be used for implementing block 524 of FIG. 5. The example computer readable instructions of FIG. 6
[0075] FIG. 7 is a flowchart of example computer readable instructions that may be executed to serialize a key-value pair. The example computer readable instructions of FIG. 7 may be used for implementing block 528 of FIG. 5. The example computer readable instructions of FIG. 7 begin when the example serialization processor 310 determines a serialized value for the value in the key-value pair (block 702). For example, the serialization processor 310 may serialize the value of the key-value pair and perform a binary to text conversion (e.g., using Base64) to store the serialized data in the object notation data. The example serialization processor 310 then modifies the key of the key-value pair to insert a serialization identifier in the key (e.g., the example serialization processor 310 may insert "[b]" in the key). The example computer readable instructions of FIG. 7 then end. For example, control may return to block 530 of FIG. 5.[0076] FIG. 8 is a flowchart of example computer readable instructions to hash and/or encrypt a key-value pair. The example computer readable instructions of FIG. 8 may be used for implementing block 520 of FIG. 5. The process of FIG. 8 begins when the example hashing and encryption handler 306 of the illustrated example determines a cipher and a key to be used (block 802). For example, a request to hash and/or encrypt may include an identification of a cipher and/or a key (e.g., a private key) that is to be used. Alternatively, the hashing and encryption handler 306 may use a default cipher and/or private key. The example hashing and encryption handler 306 then packs the string to be encrypted (block 804). For example, the example hashing and encryption handler 306 packs the key-value pair by removing any quotation marks. The hashing and encryption handler 306 may perform any other packing to remove any other characters. The example hashing and encryption handler 306 then signs the key-value pair using the identified key (block 806).[0077] The example hashing and encryption handler 306 then determines if the cipher includes encryption (block 808). For example, the cipher may be a cipher that only includes hashing or may be a cipher that includes hashing and encryption. When the cipher does not include encryption, control proceeds to block 822 for hashing of the key-value pair. When the cipher includes encryption, the hashing and encryption handler 306 encrypts the signed key-value pair (block 810). The example hashing and encryption handler 306 then converts the encrypted value to a string for insertion in the xJSON file (block 812). For example, the hashing and encryption handler 306 may convert the encrypted value to a string using Base64 encoding.[0078] After encrypting the key-value pair (block 810), the hashing and encryption handler 306 inserts an encryption identifier (e.g., "[s]") in the key of the key-value pair (block 814). The example hashing and encryption handler 306 then inserts the metadata identifying the cipher in the xJSON file (block 816). For example, the cipher metadata may be inserted in a key-value pair with key name "keys." The example hashing and encryption handler 306 then determines if there are multiple ciphers in the keys metadata (block 818). If there are multiple ciphers in the keys metadata, the example hashing and encryption handler 306 inserts a cipher identifier in the key of the encrypted key-value pair (block 820). For example, the hashing and encryption handler 306 may insert an index corresponding to the cipher in the keys metadata (e.g., "[s2]" where the cipher is the second cipher in the keys metadata).[0079] After the hashing and encryption handler 306 has determined that the cipher does not include encryption (block 808), has determined that there are not multiple ciphers (block 818), or has inserted the identifier of the cipher in the key (block 820), the example hashing and encryption handler 306 determines a hash for the value of the key-value pair (block 822). For example, the hash may be determined using a double Pearson hash. The example hashing and encryption handler 306 inserts the value of the hash into the key for the key-value pair (block 824). For example, the hash value may be inserted following a hashing identifier (e.g., the hashing identifier may be the hash symbol (#)). For example, the hash may be inserted as "[#XXXX]" where XXXX is the hash value. A key for a value that is encrypted and hashed may be "[s#XXXX]" where a single cipher is present and "[s1#XXXX]" where there are multiple ciphers and the first cipher was used for the encryption.[0080] The example computer readable instructions of FIG. 8 then end. For example, control may return to block 522 of FIG. 5. A minimal sketch of these key conventions appears below.
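For illustration, the key conventions of FIG. 8 can be sketched as follows in TypeScript. This is a hedged sketch, not the disclosed implementation: SHA-256 (truncated to four hex digits) stands in for the double Pearson hash named above, the signing and encryption steps (blocks 806-812) are elided, and all function names are hypothetical.

```typescript
import { createHash } from "crypto";

// Block 804: packing removes quotation marks before signing/hashing.
function packString(s: string): string {
  return s.replace(/"/g, "");
}

// Stand-in for the double Pearson hash; yields the "XXXX" hash value.
function hashIdentifier(value: string): string {
  const digest = createHash("sha256").update(packString(value)).digest("hex");
  return digest.slice(0, 4).toUpperCase();
}

// Builds the key for a value that is encrypted and hashed. cipherIndex
// is included only when the keys metadata lists multiple ciphers,
// yielding "[s1#XXXX]"-style keys; otherwise the key is "[s#XXXX]".
function encryptedHashedKey(value: string, cipherIndex?: number): string {
  const s = cipherIndex === undefined ? "s" : `s${cipherIndex}`;
  return `[${s}#${hashIdentifier(value)}]`;
}

console.log(encryptedHashedKey("secret"));    // e.g. [s#2BB8]
console.log(encryptedHashedKey("secret", 1)); // e.g. [s1#2BB8]
```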
[0081] While an example manner of implementing the parser 204 of FIG. 2 is illustrated in FIG. 4, one or more of the elements, processes and/or devices illustrated in FIG. 4 may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example data handler 402, the example string processor 404, the example deserialization processor 406, the example decompression handler 408, the example decryption handler 410, and/or, more generally, the parser 204 of FIGS. 2 and 4 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example data handler 402, the example string processor 404, the example deserialization processor 406, the example decompression handler 408, and/or the example decryption handler 410 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example data handler 402, the example string processor 404, the example deserialization processor 406, the example decompression handler 408, and/or the example decryption handler 410 is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), or a Blu-ray disk storing the software and/or firmware. Further still, the xJSON handler 110 of FIG. 1 and/or the parser 204 of FIGS. 2 and/or 4 may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated in FIG. 4, and/or may include more than one of any or all of the illustrated elements, processes and devices.
[0082] Flowcharts representative of example machine readable instructions for implementing the example parser 204 are shown in FIGS. 9-11. In these examples, the machine readable instructions include program(s) for execution by a processor such as the processor 1212 shown in the example processor platform(s) 1200 discussed below in connection with FIG. 12. The program(s) may be embodied in software stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor(s) 1212, but the entire program(s) and/or parts thereof could alternatively be executed by a device other than the processor(s) 1212 and/or embodied in firmware or dedicated hardware. Further, although the example program(s) are described with reference to the flowcharts illustrated in FIGS. 9-11, many other methods of implementing the example parser 204 may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.[0083] As mentioned above, the example processes of FIGS. 9-11 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and transmission media. As used herein, "tangible computer readable storage medium" and "tangible machine readable storage medium" are used interchangeably. Additionally or alternatively, the example processes of FIGS. 9-11 may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and transmission media.
[0084] FIG. 9 is a flowchart of example computer readable instructions for the example parser 204 to parse object notation data (e.g., an xJSON file). The example computer readable instructions of FIG. 9 begin when the example data handler 402 receives object notation data (block 902). For example, the data handler 402 or another component of a device including the xJSON handler 110 may request data from another device that transmits data as xJSON data.[0085] In some examples, the data handler 402 of the illustrated example requests only a portion of an available object notation data. For example, an example xJSON file might include 100,000 key-value pairs, which would exhaust the memory of a low power device (e.g., an IoT device) attempting to parse the xJSON file. Accordingly, the example data handler 402 requests a desired portion (e.g., based on a request for retrieving data). For example, the data handler 402 may reference a particular portion of the object notation data using dot notation (e.g., "@example.com/myobj.xjson#id.value1" would retrieve the key identified as value1 in the object id in the myobj.xjson file served by example.com). Thus, the example data handler 402 may retrieve a desired key(s) and/or object(s) of interest without the need to retrieve the entire object notation data. In an example implementation, an object may be referenced as "#object" where "object" is the name of the object, "@uri" where "uri" is the location from which the object notation data may be retrieved, and "@uri#object.subobject" where "subobject" identifies an object and/or key within the object "object" in the object notation data located at "uri." A sketch of this reference form appears below.
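For illustration, the "@uri#object.subobject" reference form described above can be sketched as follows. The parsing rules here are inferred from the examples and are not a normative grammar; all names are hypothetical.

```typescript
interface ObjectNotationRef {
  uri?: string;   // location serving the object notation data
  path: string[]; // object/key path within the data
}

// Splits a reference such as "@example.com/myobj.xjson#id.value1"
// into its location and its dot-notation path.
function parseReference(ref: string): ObjectNotationRef {
  const [location, fragment = ""] = ref.split("#");
  const uri = location.startsWith("@") ? location.slice(1) : undefined;
  return { uri, path: fragment ? fragment.split(".") : [] };
}

// Walks an already-parsed object along the path; in the described
// usage the request itself limits how much data is retrieved, so a
// low-power device need not materialize the remaining pairs.
function selectPath(data: unknown, path: string[]): unknown {
  return path.reduce<unknown>(
    (node, key) =>
      node !== null && typeof node === "object" ? (node as Record<string, unknown>)[key] : undefined,
    data
  );
}

const ref = parseReference("@example.com/myobj.xjson#id.value1");
console.log(ref.uri);                                      // example.com/myobj.xjson
console.log(selectPath({ id: { value1: 42 } }, ref.path)); // 42
```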
[0086] The example data handler 402 selects the first key-value pair in the object notation data (block 904). The example string processor 404 determines if the key-value pair includes string literals (e.g., quotation marks) (block 906). When the key-value pair does not include string literals, the example string processor 404 determines that the received file is of the xJSON type and stores an indication that the file is an xJSON file (e.g., because JSON files include the quotation marks but xJSON files do not need to include the quotation marks) (block 908). Control then proceeds to block 922.[0087] When the string processor 404 determines that the key-value pair includes string literals, the string processor 404 stores the type as JSON (block 910). For example, the file may be a JSON compatible file because it includes the string literals, but the file may include xJSON extensions. The example string processor 404 then removes the quotation marks from the key-value pair to reduce the size of the xJSON file (block 912).[0088] After the string processor 404 sets the type as xJSON (block 908) or after the string processor 404 removes the quotation marks (block 912), the example deserialization processor 406 determines if the key includes a serialization identifier (block 914). When the key includes a serialization identifier, the example deserialization processor 406 deserializes/demarshalls the serialized data (block 916).[0089] When the key-value pair does not include a serialization identifier (block 914) or after deserialization of the key-value pair (block 916), the example decompression handler 408 determines if the key includes a compression identifier (block 918). When the key includes a compression identifier, the example decompression handler 408 decompresses the key-value pair (block 920). Example computer readable instructions that may be executed to decompress a key-value pair are described in conjunction with FIG. 11.[0090] When the key-value pair does not include a compression identifier (block 918) or after decompression of the key-value pair (block 920), the example decryption handler 410 determines if the key of the key-value pair includes an encryption identifier (block 922). When the key includes the encryption identifier, the decryption handler 410 decrypts the key-value pair (block 924). Example computer readable instructions to decrypt the key-value pair are described in conjunction with FIG. 10. A sketch of this identifier dispatch appears below.
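For illustration, the identifier dispatch of blocks 914-924 can be sketched as follows. The handler functions are hypothetical placeholders standing in for the deserialization processor 406, the decompression handler 408, and the decryption handler 410; the regular expression for the encryption identifier is an assumption inferred from the "[s]", "[s2]", and "[s#XXXX]" examples above.

```typescript
interface Handlers {
  deserialize: (v: string) => unknown; // "[b]" marker (FIG. 7 counterpart)
  decompress: (v: string) => string;   // "[z]" marker (FIG. 11)
  decrypt: (v: string) => string;      // "[s...]" marker (FIG. 10)
}

// A pair ordinarily carries one marker; the checks mirror the order
// of blocks 914, 918, and 922.
function parseKeyValuePair(key: string, value: string, h: Handlers): unknown {
  if (key.includes("[b]")) return h.deserialize(value);  // block 916
  if (key.includes("[z]")) return h.decompress(value);   // block 920
  if (/\[s\d*(#|\])/.test(key)) return h.decrypt(value); // block 924
  return value;
}

const handlers: Handlers = {
  deserialize: (v) => JSON.parse(Buffer.from(v, "base64").toString("utf8")),
  decompress: (v) => v, // placeholder; see the gzip sketch above
  decrypt: (v) => v,    // placeholder
};
console.log(parseKeyValuePair("name", "John", handlers)); // John
console.log(parseKeyValuePair("payload[b]", Buffer.from('{"a":1}').toString("base64"), handlers)); // { a: 1 }
```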
[0091] FIG. 10 is a flowchart of example computer readable instructions to decrypt an encrypted key-value pair. The example computer readable instructions may be used for implementing block 924 of FIG. 9. The example computer readable instructions begin when the decryption handler 410 determines the cipher and key used during encryption of the key-value pair (block 1002). The example decryption handler 410 determines the cipher and public key from the keys metadata included in the object notation data. In some examples, the decryption handler 410 selects the cipher and key from a list of keys using an index identified in the key of the encrypted key-value pair.[0092] The example decryption handler 410 then obtains the private key corresponding to the public key used during encryption (block 1004). For example, the private key may be stored in a set of private keys stored in the parser 204. Alternatively, the decryption handler 410 may display a prompt requesting that a user provide a private key corresponding to an identified public key. The decryption handler 410 then decrypts the encrypted data using the private key and the identified cipher (block 1006). The example computer readable instructions of FIG. 10 then end. For example, control may return to block 926 of FIG. 9.[0093] FIG. 11 is a flowchart of example computer readable instructions to decompress a key-value pair. The example computer readable instructions of FIG. 11 may be used for implementing block 920 of FIG. 9. The example computer readable instructions of FIG. 11 begin when the example decompression handler 408 determines a compression algorithm that was used for compressing the key-value pair (block 1102). For example, the decompression handler 408 determines the compression algorithm from the metadata inserted in the value of the compressed key-value pair. The example decompression handler 408 then determines parameters for the compression (block 1104). For example, the decompression handler 408 may extract the parameters from metadata inserted in the value of the key-value pair. For example, the parameters may include a look-up table used by the compression algorithm. The example decompression handler 408 then decompresses the key-value pair using the identified compression algorithm and the parameters (block 1106). The example computer readable instructions of FIG. 11 then end. For example, control may return to block 922 of FIG. 9.[0094] FIG. 12 is a block diagram of an example processor platform 1200 structured to execute the instructions of FIGS. 5, 6, 7, 8, 9, 10, and/or 11 to implement the example first device 106 and/or the example web service 102 including the example interface 202, the example parser 204 (e.g., including the example data handler 402, the example string processor 404, the example deserialization processor 406, the example decompression handler 408, and/or the example decryption handler 410), the example generator 206 (e.g., including the example data handler 302, the example string processor 304, the example hashing and encryption handler 306, the example compression handler 308, and/or the example serialization processor 310), and/or the example JavaScript interpreter 208. The processor platform 1200 can be, for example, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), or any other type of computing device.[0095] The processor platform 1200 of the illustrated example includes a processor 1212. The processor 1212 of the illustrated example is hardware. For example, the processor 1212 can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. The example processor 1212 of FIG. 12 may implement the components of the example xJSON handler 110 including the example parser 204, the example generator 206, and the example JavaScript interpreter 208 to parse and generate xJSON files and data.[0096] The processor 1212 of the illustrated example includes a local memory 1213 (e.g., a cache). The processor 1212 of the illustrated example is in communication with a main memory including a volatile memory 1214 and a non-volatile memory 1216 via a bus 1218. The volatile memory 1214 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory 1216 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1214, 1216 is controlled by a memory controller.[0097] The processor platform 1200 of the illustrated example also includes an interface circuit 1220. The interface circuit 1220 may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface. The example interface circuit 1220 may implement the example interface 202 of the xJSON handler 110 of FIG. 1 and/or 2 to interface the processor platform 1200 with the example network 104 of FIG. 1.[0098] In the illustrated example, one or more input devices 1222 are connected to the interface circuit 1220. The input device(s) 1222 permit(s) a user to enter data and commands into the processor 1212.
The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system.[0099] One or more output devices 1224 are also connected to the interface circuit 1220 of the illustrated example. The output devices 1224 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit 1220 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip or a graphics driver processor.[00100] The interface circuit 1220 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1226 (e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system).[00101] The processor platform 1200 of the illustrated example also includes one or more mass storage devices 1228 for storing software and/or data. Examples of such mass storage devices 1228 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives.[00102] The coded instructions 1232 of FIGS. 5, 6, 7, 8, 9, 10 and/or 11 may be stored in the mass storage device 1228, in the volatile memory 1214, in the non-volatile memory 1216, and/or on a removable tangible computer readable storage medium such as a CD or DVD.[00103] Examples disclosed herein provide extensions to object notation data (e.g., human-readable object notation data such as JSON). In some examples, usage of data storage and communication bandwidth is reduced by packing and/or compressing portions of the object notation data. In some examples, computer processing resource usage is reduced by allowing portions of object notation data to be packed, compressed, serialized, and/or encrypted while allowing other portions of the object notation data not to be extended. For example, in a JSON file, using examples disclosed herein, a single key-value pair can be encrypted without requiring the entire JSON file to be encrypted, which reduces the amount of processing required to encrypt and decrypt the elements of the JSON file. In some examples, backward compatibility with devices that do not support the extensions is provided by generating output (e.g., extended JSON files) that follows grammar rules set by prior object notation protocols. Accordingly, such extended files that meet the grammar rules of the prior protocol will not trigger errors when parsing the extended files with a device that does not support the extensions.[00104] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims. |
Stacked microelectronic devices and methods for manufacturing such devices are disclosed herein. In one embodiment, a stacked microelectronic device assembly can include a first known good packaged microelectronic device including a first interposer substrate. A first die and first through-casing interconnects are electrically coupled to the first interposer substrate. A first casing at least partially encapsulates the first device such that a portion of each first interconnect is accessible at a top portion of the first casing. A second known good packaged microelectronic device is coupled to the first device in a stacked configuration. The second device can include a second interposer substrate having a plurality of second interposer pads and a second die electrically coupled to the second interposer substrate. The exposed portions of the first interconnects are electrically coupled to corresponding second interposer pads. |
CLAIMS I/We claim: 1. A stacked microelectronic device assembly, comprising: a first known good packaged microelectronic device including (a) a first interposer substrate with a plurality of first interposer contacts, (b) a first die carried by and electrically coupled to the first interposer contacts, (c) a first casing having a first face at the first interposer substrate and a second face opposite the first face such that the first casing encapsulates the first die and at least a portion of the first interposer substrate, and (d) a plurality of first through-casing interconnects at least partially encapsulated in the first casing and in contact with corresponding first interposer contacts, wherein the first interconnects extend from the first face to the second face; and a second known good packaged microelectronic device coupled to the first device in a stacked configuration, the second device including (a) a second interposer substrate with a plurality of second interposer pads, (b) a second die carried by and electrically coupled to the second interposer substrate, and (c) a second casing that encapsulates the second die and at least a portion of the second interposer substrate, wherein the second interposer pads are electrically coupled to the exposed portions of the corresponding first interconnects at the second face of the first casing. 2. The assembly of claim 1 wherein: the first interposer substrate includes a first side and a second side opposite the first side, the first interposer contacts being arranged in a desired pattern at the first side; the first interconnects comprise a plurality of first conductive lead fingers attached to the first side of the first interposer substrate and projecting inwardly from a periphery of the first casing toward the first die, the first lead fingers being electrically coupled to corresponding first interposer contacts; the second interposer substrate includes a first side, a second side opposite the first side, a plurality of the second interposer contacts arranged in a desired pattern at the first side, and the second interposer pads arranged in a desired pattern at the second side; and the assembly further comprises a plurality of second conductive lead fingers attached to the first side of the second interposer substrate and projecting inwardly from a periphery of the second casing toward the second die, the second lead fingers being electrically coupled to corresponding second interposer contacts. 3. The assembly of claim 2 wherein the individual first and second lead fingers each include: a front portion facing toward the first and second die, respectively; and a back portion opposite the front portion and generally aligned with the periphery of the first and second casing, respectively, such that at least a portion of each of the first and second lead fingers is accessible at the periphery of the first casing and the second casing, respectively. 4. The assembly of claim 2, further comprising a plurality of electrical connectors coupling the exposed portion of each of the first lead fingers at the second face of the first casing to corresponding second interposer pads at the second side of the second interposer substrate. 5. 
The assembly of claim 1 wherein: the first interposer substrate includes a first side and a second side opposite the first side, the first interposer contacts being arranged in a desired pattern at the first side; the first interconnects comprise a plurality of first filaments attached to and projecting away from corresponding first interposer contacts; the second interposer substrate includes a first side, a second side opposite the first side, a plurality of the second interposer contacts arranged in a desired pattern at the first side, and the second interposer pads arranged in a desired pattern at the second side; and the assembly further comprises a plurality of second filaments attached to and projecting away from corresponding second interposer contacts. 6. The assembly of claim 5 wherein: the first filaments include a plurality of first free-standing wire-bond lines attached to corresponding first interposer contacts; and the second filaments include a plurality of second free-standing wire-bond lines attached to corresponding second interposer contacts. 7. The assembly of claim 5, further comprising a plurality of electrical couplers attached to a distal portion of each of the first and second filaments, and wherein the electrical couplers on the first filaments are electrically coupled to corresponding second interposer pads. 8. The assembly of claim 5 wherein: the first filaments include a plurality of first wire loops attached to corresponding first interposer contacts; and the second filaments include a plurality of second wire loops attached to corresponding second interposer contacts. 9. The assembly of claim 1 wherein: the first interposer substrate includes a first side and a second side opposite the first side, the first interposer contacts being arranged in a desired pattern at the first side; the first interconnects comprise a plurality of first openings through the first casing and at least partially filled with a conductive material, the first openings being generally aligned with corresponding first interposer contacts; the second interposer substrate includes a first side, a second side opposite the first side, a plurality of the second interposer contacts arranged in a desired pattern at the first side, and the second interposer pads arranged in a desired pattern at the second side; and the assembly further comprises a plurality of second openings through the second casing at least partially filled with the conductive material, the second openings being generally aligned with corresponding second interposer contacts. 10. The assembly of claim 9 wherein the conductive material includes a solder material deposited into the first and second openings using a reflow process. 11. The assembly of claim 9 wherein at least a portion of each first and second interconnect is aligned with a periphery of the first and second casing, respectively, such that the first and second interconnects are accessible along the periphery of the first casing and the second casing, respectively. 12. The assembly of claim 9 wherein the first and second interconnects are inboard of a periphery of the first casing and the second casing, respectively, such that the first and second interconnects are not accessible along the periphery of the first casing and the second casing, respectively. 13.
The assembly of claim 9, further comprising a plurality of electrical connectors coupling the exposed portion of each of the first interconnects at the second face of the first casing to corresponding second interposer pads at the second side of the second interposer substrate. 14. The assembly of claim 1 wherein: the first interposer substrate includes a first side and a second side opposite the first side, the first interposer contacts being arranged in a desired pattern at the first side; the first die is electrically coupled to corresponding first interposer contacts with a plurality of first wire-bonds; the second interposer substrate includes a first side, a second side opposite the first side, a plurality of the second interposer contacts arranged in a desired pattern at the first side, and the second interposer pads arranged in a desired pattern at the second side; and the second die is electrically coupled to corresponding second interposer contacts with a plurality of second wire-bonds. 15. The assembly of claim 1 wherein: the first interposer substrate includes a first side and a second side opposite the first side, the first interposer contacts being arranged in a desired pattern at the first side; the first die includes an active side adjacent to the first side of the first interposer substrate, a back side, a plurality of first terminals at the active side, and integrated circuitry electrically coupled to the first terminals, and wherein the first terminals are electrically coupled to corresponding first interposer contacts; the second interposer substrate includes a first side, a second side opposite the first side, a plurality of the second interposer contacts arranged in a desired pattern at the first side, and the second interposer pads arranged in a desired pattern at the second side; and the second die includes an active side adjacent to the first side of the second interposer substrate, a back side, a plurality of second terminals at the active side, and integrated circuitry electrically coupled to the second terminals, and wherein the second terminals are electrically coupled to corresponding second interposer contacts. 16. The assembly of claim 1 wherein the first interposer substrate includes a first side, a second side opposite the first side, the first interposer contacts arranged in a desired pattern at the first side, and a plurality of first interposer pads at the second side arranged in a pattern corresponding to a standard JEDEC pinout. 17. The assembly of claim 16, further comprising a plurality of electrical couplers attached to corresponding first interposer pads. 18. The assembly of claim 1, further comprising an underfill material between the first and second devices. 19.
The assembly of claim 1 wherein: the second interposer substrate includes a first side, a second side opposite the first side, a plurality of the second interposer contacts arranged in a desired pattern at the first side, and the second interposer pads arranged in a desired pattern at the second side; the second casing has a first face at the first side of the second interposer substrate and a second face opposite the first face; the second device further comprises a plurality of second through-casing interconnects at least partially encapsulated in the second casing and in contact with corresponding second interposer contacts, the second interconnects extending from the first face of the second casing to the second face of the second casing; and the assembly further comprises a third known good packaged microelectronic device coupled to the second device in a stacked configuration, the third device including (a) a third interposer substrate with a plurality of third interposer pads, (b) a third die carried by and electrically coupled to the third interposer substrate, and (c) a third casing that encapsulates the third die and at least a portion of the third interposer substrate, wherein the third interposer pads are electrically coupled to the exposed portions of the corresponding second interconnects at the second face of the second casing. 20. A set of stacked microelectronic devices, comprising: a first known good packaged microelectronic device including - a first interposer substrate having a first side, a second side opposite the first side, a plurality of first interposer contacts at the first side, and a plurality of first interposer pads at the second side arranged in an array corresponding to a standard JEDEC pinout; a first microelectronic die attached to the first side of the interposer substrate and electrically coupled to the first interposer contacts; a plurality of first interconnects electrically coupled to and in contact with corresponding first interposer contacts; and a first casing that encapsulates the first die, at least a portion of the first interposer substrate, and at least a portion of the first interconnects, wherein the first casing has a first thickness and each of the first interconnects has a thickness equal to or greater than the first thickness such that at least a portion of each first interconnect is accessible at a top surface of the first casing; and a second known good packaged microelectronic device coupled to the first device in a stacked configuration, the second device including - a second interposer substrate having a first side, a second side opposite the first side and facing the first microelectronic device, a plurality of second interposer contacts at the first side, and a plurality of second interposer pads arranged in an array at the second side, wherein the first interconnects are directly electrically coupled to corresponding second interposer pads; a second microelectronic die carried by the first side of the second interposer substrate and electrically coupled to corresponding second interposer contacts; a plurality of second interconnects electrically coupled to and in contact with corresponding second interposer contacts; and a second casing that encapsulates the second die, at least a portion of the second interposer substrate, and at least a portion of the second interconnects. 21. 
The stacked microelectronic devices of claim 20 wherein: the first interconnects comprise a plurality of first conductive lead fingers attached to the first side of the first interposer substrate and projecting inwardly from a periphery of the first casing toward the first die, the first lead fingers being in contact with and electrically coupled to corresponding first interposer contacts; and the second interconnects comprise a plurality of second conductive lead fingers attached to the first side of the second interposer substrate and projecting inwardly from a periphery of the second casing toward the second die, the second lead fingers being in contact with and electrically coupled to corresponding second interposer contacts. 22. The stacked microelectronic devices of claim 21 wherein the first and second lead fingers each include: a front portion facing toward the corresponding first or second die; and a back portion opposite the front portion and generally aligned with the periphery of the corresponding first or second casing such that at least a portion of the lead finger is accessible at the periphery of each casing. 23. The stacked microelectronic devices of claim 21, further comprising a plurality of electrical connectors coupling the exposed portion of each of the first lead fingers at the top surface of the first casing to corresponding second interposer pads at the second side of the second interposer substrate. 24. The stacked microelectronic devices of claim 20 wherein: the first interconnects comprise a plurality of first filaments attached to and projecting away from corresponding first interposer contacts; and the second interconnects comprise a plurality of second filaments attached to and projecting away from corresponding second interposer contacts. 25. The stacked microelectronic devices of claim 24 wherein: the first filaments include a plurality of first free-standing wire-bond lines attached to corresponding first interposer contacts; and the second filaments include a plurality of second free-standing wire-bond lines attached to corresponding second interposer contacts. 26. The stacked microelectronic devices of claim 24, further comprising a plurality of electrical couplers attached to a distal portion of each of the first and second filaments, and wherein the electrical couplers on the first filaments are electrically coupled to corresponding second interposer pads. 27. The stacked microelectronic devices of claim 24 wherein: the first interconnects include a plurality of first wire loops attached to corresponding first interposer contacts; and the second interconnects include a plurality of second wire loops attached to corresponding second interposer contacts. 28. The stacked microelectronic devices of claim 20 wherein: the first interconnects comprise a plurality of first openings extending through the first casing to corresponding first interposer contacts and a first conductive material deposited into at least a portion of each first opening; and the second interconnects comprise a plurality of second openings extending through the second casing to corresponding second interposer contacts and a second conductive material deposited into at least a portion of each second opening. 29. The stacked microelectronic devices of claim 28 wherein the first and second conductive materials include a solder material deposited into the first and second openings, respectively, using a reflow process. 30.
The stacked microelectronic devices of claim 28 wherein at least a portion of each first and second interconnect is aligned with a periphery of the first and second casing, respectively, such that the first and second interconnects are accessible along the periphery of the first casing and the second casing, respectively. 31. The stacked microelectronic devices of claim 28 wherein the first and second interconnects are inboard of a periphery of the first casing and the second casing, respectively, such that the first and second interconnects are not accessible along the periphery of the first casing and the second casing, respectively. 32. The stacked microelectronic devices of claim 28, further comprising a plurality of electrical connectors coupling the exposed portion of each of the first interconnects at the top surface of the first casing to corresponding second interposer pads at the second side of the second interposer substrate. 33. The stacked microelectronic devices of claim 20 wherein: the first die includes an active side, a back side adjacent to the first side of the first interposer substrate, a plurality of first terminals at the active side, and integrated circuitry electrically coupled to the first terminals, and wherein the first terminals are electrically coupled to corresponding first interposer contacts with a plurality of first wire-bonds; and the second die includes an active side, a back side adjacent to the first side of the second interposer substrate, a plurality of second terminals at the active side, and integrated circuitry electrically coupled to the second terminals, and wherein the second terminals are electrically coupled to corresponding second interposer contacts with a plurality of second wire-bonds. 34. The stacked microelectronic devices of claim 20 wherein: the first die includes an active side adjacent to the first side of the first interposer substrate, a back side, a plurality of first terminals at the active side, and integrated circuitry electrically coupled to the first terminals, and wherein the first terminals are electrically coupled to corresponding first interposer contacts; and the second die includes an active side adjacent to the first side of the second interposer substrate, a back side, a plurality of second terminals at the active side, and integrated circuitry electrically coupled to the second terminals, and wherein the second terminals are electrically coupled to corresponding second interposer contacts. 35. The stacked microelectronic devices of claim 20, further comprising a plurality of electrical couplers attached to corresponding first interposer pads. 36. The stacked microelectronic devices of claim 20, further comprising an underfill material between the first and second devices. 37. 
The stacked microelectronic devices of claim 20 wherein the second casing has a second thickness and each of the second interconnects has a thickness equal to or greater than the second thickness such that at least a portion of each second interconnect is accessible at a top surface of the second casing, and wherein the assembly further comprises: a third microelectronic device coupled to the second device in a stacked configuration, the third device including - a third interposer substrate having a plurality of third interposer pads, wherein the third interposer pads are electrically coupled to the exposed portions of corresponding second interconnects at the top surface of the second casing; a third microelectronic die carried by and electrically coupled to the third interposer substrate; and a third casing that encapsulates the third die and at least a portion of the third interposer substrate. 38. A packaged microelectronic device, comprising: an interposer substrate having a first side with a plurality of interposer contacts and a second side opposite the first side, the second side including a plurality of interposer pads arranged in an array corresponding to a standard JEDEC pinout; a microelectronic die attached and electrically coupled to the interposer substrate; a casing covering the die and at least a portion of the interposer substrate, wherein the casing has a first thickness and a top facing away from the interposer substrate; and a plurality of electrically conductive through-casing interconnects in contact with and projecting from corresponding interposer contacts, wherein the through-casing interconnects extend through the thickness of the casing to a terminus at the top of the casing, and wherein the through-casing interconnects are at least partially encapsulated in the casing. 39. The microelectronic device of claim 38 wherein the through-casing interconnects comprise a plurality of conductive lead fingers attached to the first side of the interposer substrate and electrically coupled to corresponding interposer contacts, each lead finger extending toward the die and including (a) a front portion facing toward the die, and (b) a back portion opposite the front portion, the back portion being generally aligned with a periphery of the casing such that at least a portion of each lead finger is accessible along the periphery of the casing. 40. The microelectronic device of claim 38 wherein the through-casing interconnects comprise a plurality of filaments attached to and projecting away from the interposer contacts in a direction generally normal to the first side of the interposer substrate. 41. The microelectronic device of claim 40, further comprising an electrical coupler coupled to a distal portion of each filament. 42. The microelectronic device of claim 38 wherein the through-casing interconnects comprise a plurality of wire loops attached to and projecting away from the interposer contacts. 43. The microelectronic device of claim 38 wherein the through-casing interconnects comprise a plurality of openings extending through the casing to corresponding interposer contacts and a conductive material deposited into at least a portion of each opening. 44. The microelectronic device of claim 43 wherein the conductive material includes a solder material deposited into the openings using a reflow process. 45.
The microelectronic device of claim 43 wherein a portion of each interconnect is aligned with a periphery of the casing such that the individual interconnects are accessible along the periphery of the casing. 46. The microelectronic device of claim 43 wherein the interconnects are inboard of a periphery of the casing such that the interconnects are not accessible along the periphery of the casing. 47. The microelectronic device of claim 38 wherein the die includes an active side, a back side adjacent to the first side of the interposer substrate, a plurality of terminals at the active side, and integrated circuitry electrically coupled to the terminals, and wherein the terminals are electrically coupled to corresponding interposer contacts with a plurality of wire-bonds. 48. The microelectronic device of claim 38 wherein the die includes an active side adjacent to the first side of the interposer substrate, a back side, a plurality of terminals at the active side, and integrated circuitry electrically coupled to the terminals, and wherein the terminals are electrically coupled to corresponding interposer contacts. 49. The microelectronic device of claim 38 wherein the device is a known good packaged device. 50. The microelectronic device of claim 38, further comprising a plurality of electrical couplers attached to corresponding interposer pads. 51. A stacked microelectronic assembly, comprising: a first known good packaged microelectronic device including - a support member having support member circuitry, the support member circuitry including a plurality of support member contacts; a microelectronic die attached to the support member and electrically coupled to corresponding support member contacts; a casing covering the die and at least a portion of the support member, wherein the casing has a first thickness and a top facing away from the support member; and a plurality of through-casing interconnects in contact with and projecting from corresponding support member contacts, wherein the through-casing interconnects extend through the thickness of the casing to a terminus at the top of the casing, and wherein the through-casing interconnects are at least partially encapsulated in the casing; and one or more second known good packaged devices attached to the first device in a stacked configuration, the individual second devices being electrically coupled to the exposed portions of corresponding interconnects at the top of the casing. 52.
A method for manufacturing a stacked microelectronic device, the method comprising: positioning a first known good packaged microelectronic device proximate to a second known good packaged microelectronic device, the first device including a first interposer substrate, a first die electrically coupled to the first interposer substrate, and a plurality of electrically conductive first interconnects electrically coupled to the first interposer substrate, wherein the first die, at least a portion of the first interposer substrate, and at least a portion of the first interconnects are encased in a first casing, and wherein the first interconnects have accessible terminals at a top portion of the first casing; and mounting the second device to the first device in a stacked configuration, the second device including a second interposer substrate, a second die electrically coupled to the second interposer substrate, and a second casing covering the second die and at least a portion of the second interposer substrate, wherein the terminals of the first interconnects at the top portion of the first casing are electrically coupled to corresponding interposer pads of the second interposer substrate. 53. The method of claim 52 wherein the first interposer substrate includes a first side, a second side opposite the first side, a plurality of first interposer contacts at the first side, and a plurality of first interposer pads at the second side arranged in an array corresponding to a JEDEC pinout, and wherein: positioning a first known good packaged microelectronic device proximate to a second known good packaged microelectronic device includes positioning a first device having a plurality of first interconnects in contact with and projecting away from corresponding first interposer contacts. 54. The method of claim 53 wherein positioning a first device having a plurality of first interconnects in contact with and projecting away from corresponding first interposer contacts comprises: attaching a plurality of first lead fingers to the first side of the first interposer substrate such that each lead finger is in electrical contact with corresponding first interposer contacts, the first lead fingers having a thickness greater than or equal to the thickness of the first casing such that at least a portion of each lead finger is accessible at the top portion of the first casing. 55. The method of claim 54 wherein attaching a plurality of lead fingers to the first side of the first interposer substrate includes attaching lead fingers having a front surface facing toward the first die and a back surface opposite the front surface, the back surfaces of each lead finger being generally aligned with a periphery of the first casing such that each lead finger is generally accessible along the periphery of the first casing. 56. The method of claim 53 wherein positioning a first device having a plurality of first interconnects in contact with and projecting away from corresponding first interposer contacts comprises: attaching a plurality of first filaments to corresponding first interposer contacts such that the first filaments project from corresponding first interposer contacts, the first filaments having a height greater than or equal to a thickness of the first casing such that at least a portion of each first filament is accessible at the top portion of the first casing. 57.
The method of claim 56 wherein attaching a plurality of first filaments to corresponding first interposer contacts includes attaching first freestanding wire-bond lines to corresponding first interposer contacts. 58. The method of claim 56, further comprising attaching a plurality of electrical couplers to a distal end of each first filament. 59. The method of claim 56 wherein attaching a plurality of first filaments to corresponding first interposer contacts includes attaching first wire loops to corresponding first interposer contacts. 60. The method of claim 53 wherein positioning a first device having a plurality of first interconnects in contact with and projecting away from corresponding first interposer contacts comprises: forming a plurality of openings through the first casing generally aligned with the first interposer contacts; and depositing a conductive material into at least a portion of each opening and in contact with corresponding first interposer contacts to form the first interconnects. 61. The method of claim 60 wherein forming a plurality of openings through the first casing generally aligned with the first interposer contacts includes forming a plurality of openings aligned with at least a portion of a periphery of the first casing such that the first interconnects are accessible along the periphery of the first casing. 62. The method of claim 60 wherein forming a plurality of openings through the first casing generally aligned with the first interposer contacts includes forming a plurality of openings inboard of a periphery of the first casing such that the first interconnects are not accessible along the periphery of the first casing. 63. The method of claim 60 wherein depositing a conductive material into at least a portion of each opening includes depositing a solder material into each opening using a reflow process. 64. The method of claim 52 wherein the first interposer substrate includes a first side, a second side opposite the first side, a plurality of first interposer contacts at the first side, and a plurality of first interposer pads at the second side arranged in an array corresponding to a JEDEC pinout, and wherein: positioning a first known good packaged microelectronic device proximate to a second known good packaged microelectronic device includes positioning a first device having a first die with an active side, a back side adjacent the interposer substrate, integrated circuitry, and a plurality of terminals at the active side and electrically coupled to the integrated circuitry, and wherein a plurality of wire-bonds electrically couple the terminals to corresponding first interposer contacts. 65. The method of claim 52 wherein the first interposer substrate includes a first side, a second side opposite the first side, a plurality of first interposer contacts at the first side, and a plurality of first interposer pads at the second side arranged in an array corresponding to a JEDEC pinout, and wherein: positioning a first known good packaged microelectronic device proximate to a second known good packaged microelectronic device includes positioning a first device having a first die with an active side adjacent the interposer substrate, a back side, integrated circuitry, and a plurality of terminals at the active side and electrically coupled to the integrated circuitry, the terminals being electrically coupled to corresponding first interposer contacts. 66. 
The method of claim 52 wherein positioning a first known good packaged microelectronic device proximate to a second known good packaged microelectronic device includes positioning a first device having a first footprint and a first arrangement of first interconnects proximate to a second device having a second footprint and a second arrangement of second interconnects, the first footprint being at least generally similar to the second footprint and the first arrangement of first interconnects being at least generally similar to the second arrangement of second interconnects. 67. The method of claim 52 wherein mounting the second device to the first device includes coupling a plurality of electrical connectors between the exposed portions of the first interconnects and corresponding interposer pads of the second interposer substrate. 68. The method of claim 52 wherein mounting the second device to the first device includes directly coupling the exposed portions of the first interconnects to corresponding interposer pads of the second interposer substrate. 69. The method of claim 52, further comprising disposing an underfill material between the first device and the second device. 70. The method of claim 52 wherein the first interposer substrate includes a first side, a second side opposite the first side, a plurality of first interposer contacts at the first side, and a plurality of first interposer pads at the second side arranged in an array corresponding to a JEDEC pinout, and wherein: positioning a first known good packaged microelectronic device proximate to a second known good packaged microelectronic device includes positioning a first device having a first die attached to the first side and electrically coupled to corresponding first interposer contacts; and the method further comprises attaching a plurality of electrical couplers to corresponding first interposer pads at the second side of the first interposer substrate. 71. The method of claim 52, further comprising attaching and electrically coupling a third known good packaged microelectronic device to the second device in a stacked configuration. 72. The method of claim 52, further comprising testing the first and second microelectronic devices post-packaging and before positioning the first device proximate to a second device. 73. A method for manufacturing a packaged microelectronic device, the method comprising: attaching a microelectronic die to a first side of a support member having support member circuitry, the support member including a second side opposite the first side and a plurality of support member pads at the second side arranged in a pattern corresponding to a standard JEDEC pinout; electrically coupling the die to a plurality of support member contacts at the first side of the support member; forming a plurality of through-casing interconnects attached to and projecting from corresponding support member contacts; and encapsulating the die, at least a portion of the support member, and at least a portion of the through-casing interconnects with a casing, wherein the through-casing interconnects have accessible terminals at a top portion of the casing. 74. 
The method of claim 73 wherein forming a plurality of through-casing interconnects comprises attaching a plurality of lead fingers to the first side of the support member such that each lead finger is in electrical contact with corresponding support member contacts, the lead fingers having a thickness greater than or equal to the thickness of the casing such that at least a portion of each lead finger is accessible at the top portion of the casing. 75. The method of claim 74 wherein attaching a plurality of lead fingers to the first side of the first support member includes attaching lead fingers having a front surface facing toward the die and a back surface opposite the front surface, the back surface of each lead finger being generally aligned with a periphery of the casing such that each lead finger is generally accessible along the periphery of the casing. 76. The method of claim 73 wherein forming a plurality of through-casing interconnects comprises attaching a plurality of filaments to corresponding support member contacts such that the filaments are electrically coupled to and project from corresponding support member contacts, the filaments having a height greater than or equal to a thickness of the casing such that at least a portion of each filament is accessible at the top portion of the casing. 77. The method of claim 76 wherein attaching a plurality of filaments to corresponding support member contacts includes attaching free-standing wire-bond lines to corresponding support member contacts. 78. The method of claim 76, further comprising attaching a plurality of electrical couplers to a distal end of each filament. 79. The method of claim 76 wherein attaching a plurality of filaments to corresponding support member contacts includes attaching wire loops to corresponding support member contacts. 80. The method of claim 73 wherein forming a plurality of through-casing interconnects comprises: forming a plurality of openings through the casing generally aligned with the support member contacts; and depositing a conductive material into at least a portion of each opening and in contact with corresponding support member contacts to form the interconnects. 81. The method of claim 80 wherein forming a plurality of openings through the casing includes forming a plurality of openings aligned with at least a portion of a periphery of the casing such that the interconnects are accessible along the periphery of the casing. 82. The method of claim 80 wherein forming a plurality of openings through the casing includes forming a plurality of openings inboard of a periphery of the casing such that the interconnects are not accessible along the periphery of the casing. 83. The method of claim 80 wherein depositing a conductive material into at least a portion of each opening includes depositing a solder material into each opening using a reflow process. 84. The method of claim 73 wherein electrically coupling the die to a plurality of support member contacts at the first side of the support member includes wire-bonding a plurality of terminals on the die to corresponding support member contacts. 85.
The method of claim 73 wherein: attaching a microelectronic die to a first side of a support member having support member circuitry includes attaching an active side of the die to the first side of the support member; and electrically coupling the die to a plurality of support member contacts includes electrically coupling a plurality of terminals at the active side of the die to corresponding support member contacts. 86. The method of claim 73, further comprising attaching a plurality of electrical couplers to corresponding support member pads. 87. The method of claim 73, further comprising testing the microelectronic device post-packaging. 88. A method for manufacturing a stacked microelectronic device assembly, the method comprising: assembling a plurality of microelectronic devices, each microelectronic device assembled by: attaching a microelectronic die to a first side of a support member having support member circuitry, the support member including a second side opposite the first side and a plurality of support member pads at the second side arranged in a pattern corresponding to a standard JEDEC pinout; electrically coupling the die to a plurality of support member contacts at the first side of the support member; forming a plurality of through-casing interconnects attached to and projecting from corresponding support member contacts; and encapsulating the die, at least a portion of the support member, and at least a portion of the through-casing interconnects with a casing, wherein the through-casing interconnects have accessible terminals at a top portion of the casing; testing each assembled microelectronic device; and attaching and electrically coupling the interconnects of a first one of the known good assembled microelectronic devices to the support member pads of a second one of the known good assembled microelectronic devices such that the second microelectronic device is attached to the first microelectronic device in a stacked configuration. |
MICROELECTRONIC DEVICES, STACKED MICROELECTRONIC DEVICES, AND METHODS FOR MANUFACTURING SUCH DEVICES
TECHNICAL FIELD
The present invention is related to microelectronic devices, stacked microelectronic devices, and methods for manufacturing such devices.
BACKGROUND
Microelectronic devices generally have a die (i.e., a chip) that includes integrated circuitry having a high density of very small components. In a typical process, a large number of dies are manufactured on a single wafer using many different processes that may be repeated at various stages (e.g., implanting, doping, photolithography, chemical vapor deposition, plasma vapor deposition, plating, planarizing, etching, etc.). The dies typically include an array of very small bond-pads electrically coupled to the integrated circuitry. The bond-pads are the external electrical contacts on the die through which the supply voltage, signals, etc., are transmitted to and from the integrated circuitry. The dies are then separated from one another (i.e., singulated) by dicing the wafer and backgrinding the individual dies. After the dies have been singulated, they are typically "packaged" to couple the bond-pads to a larger array of electrical terminals that can be more easily coupled to the various power supply lines, signal lines, and ground lines.
An individual die can be packaged by electrically coupling the bond-pads on the die to arrays of pins, ball-pads, or other types of electrical terminals, and then encapsulating the die to protect it from environmental factors (e.g., moisture, particulates, static electricity, and physical impact). In one application, the bond-pads are electrically connected to contacts on an interposer substrate that has an array of ball-pads. Figure 1A schematically illustrates a conventional packaged microelectronic device 10 including an interposer substrate 20 and a microelectronic die 40 attached to the interposer substrate 20. The microelectronic die 40 has been encapsulated with a casing 30 to protect the die 40 from environmental factors.
Electronic products require packaged microelectronic devices to have an extremely high density of components in a very limited space. For example, the space available for memory devices, processors, displays, and other microelectronic components is quite limited in cell phones, PDAs, portable computers, and many other products. As such, there is a strong drive to reduce the surface area or "footprint" of the microelectronic device 10 on a printed circuit board. Reducing the size of the microelectronic device 10 is difficult because high performance microelectronic devices 10 generally have more bond-pads, which result in larger ball-grid arrays and thus larger footprints. One technique used to increase the density of microelectronic devices 10 within a given footprint is to stack one microelectronic device 10 on top of another.
Figure 1B schematically illustrates a first packaged microelectronic device 10a attached to a second similar microelectronic device 10b in a stacked configuration. The interposer substrate 20 of the first microelectronic device 10a is coupled to the interposer substrate 20 of the second microelectronic device 10b by large solder balls 50.
One drawback of the stacked devices 10a-b is that the large solder balls 50 required to span the distance between the two interposer substrates 20 use valuable space on the interposer substrates 20, which increases the footprint of the microelectronic devices 10a-b.
Figure 2 schematically illustrates another packaged microelectronic device 60 in accordance with the prior art. The device 60 includes a first microelectronic die 70a attached to a substrate 80 and a second microelectronic die 70b attached to the first die 70a. The first and second dies 70a-b are electrically coupled to the substrate 80 with a plurality of wire-bonds 90, and the device 60 further includes a casing 95 encapsulating the dies 70a-b and wire-bonds 90. One drawback of the packaged microelectronic device 60 illustrated in Figure 2 is that if one of the dies 70a-b fails a post-encapsulation quality control test, then the packaged device 60, including the good die 70, is typically discarded. Similarly, if one of the dies 70a-b becomes inoperable and/or is damaged after packaging, the entire packaged device 60 (rather than just the bad die) is generally discarded. Accordingly, there is a need to provide stacked microelectronic device packages that have small footprints and good dies.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1A is a partially schematic side cross-sectional view of a conventional packaged microelectronic device in accordance with the prior art.
Figure 1B is a partially schematic side cross-sectional view of the packaged microelectronic device of Figure 1A stacked on top of a second similar microelectronic device.
Figure 2 is a partially schematic side cross-sectional view of another packaged microelectronic device in accordance with the prior art.
Figures 3A-7 illustrate stages of a method for manufacturing a plurality of stacked microelectronic devices in accordance with one embodiment of the invention.
Figures 8-13 illustrate stages of a method for manufacturing a plurality of stacked microelectronic devices in accordance with another embodiment of the invention.
Figure 14 is a partially schematic side cross-sectional view of a microelectronic device configured in accordance with still another embodiment of the invention.
Figures 15A-18 illustrate stages of a method for manufacturing a plurality of stacked microelectronic devices in accordance with yet another embodiment of the invention.
Figures 19 and 20 illustrate stages of a method for manufacturing a plurality of stacked microelectronic devices in accordance with still yet another embodiment of the invention.
DETAILED DESCRIPTION
A. Overview/Summary
The following disclosure describes several embodiments of microelectronic devices, stacked microelectronic devices, and methods for manufacturing such devices. One aspect of the invention is directed toward a stacked microelectronic device assembly including a first known good packaged microelectronic device and a second known good packaged microelectronic device coupled to the first device in a stacked configuration. The first device can include a first interposer substrate with a plurality of first interposer contacts and a first die carried by and electrically coupled to the first interposer contacts. The first device can also include a first casing having a first face at the first interposer substrate and a second face opposite the first face such that the first casing encapsulates the first die and at least a portion of the first interposer substrate.
The first device can further include a plurality of first through-casing interconnects at least partially encapsulated in the first casing and in contact with corresponding first interposer contacts. The first interconnects extend from the first face to the second face.
The second device can include a second interposer substrate with a plurality of second interposer pads and a second die carried by and electrically coupled to the second interposer substrate. The second device can also include a second casing that encapsulates the second die and at least a portion of the second interposer substrate. The second interposer pads are electrically coupled to the exposed portions of the corresponding first interconnects at the second face of the first casing.
The first interconnects can have a number of different configurations. In one embodiment, for example, the first interconnects comprise a plurality of lead fingers attached to the first side of the first interposer substrate and projecting inwardly from a periphery of the first casing toward the first die. The lead fingers can be in contact with and electrically coupled to corresponding first interposer contacts. In another embodiment, the first interconnects comprise filaments attached to and projecting from the first interposer contacts. In still another embodiment, the first interconnects comprise a plurality of openings extending through the first casing and generally aligned with corresponding first interposer contacts. The individual openings can be at least partially filled with a conductive material (e.g., a solder material deposited into the openings using a reflow process). In some embodiments, the first interconnects are at least partially aligned with a periphery of the first casing such that at least a portion of each interconnect is accessible along the periphery of the first casing. In other embodiments, however, the first interconnects are inboard of the periphery of the first casing such that the first interconnects are not accessible along the periphery. In several embodiments, the second device can further include a plurality of second through-casing interconnects at least partially encapsulated in the second casing and in contact with corresponding second interposer contacts on the second interposer substrate. The second interconnects can include features generally similar to the first interconnects described above. In still further embodiments, one or more additional known good packaged microelectronic devices can be attached and electrically coupled to the second device in a stacked configuration.
Another aspect of the invention is directed toward methods for manufacturing microelectronic devices. One embodiment of such a method includes positioning a first known good packaged microelectronic device proximate to a second known good packaged microelectronic device. The first device can include a first interposer substrate, a first die electrically coupled to the first interposer substrate, and a plurality of electrically conductive interconnects electrically coupled to the interposer substrate. The first die, at least a portion of the first interposer substrate, and at least a portion of the first interconnects are encased in a first casing. The first interconnects have accessible terminals at a top portion of the first casing. The method also includes mounting the second device to the first device in a stacked configuration.
The second device can include a second interposer substrate and a second die electrically coupled to the second interposer substrate. A second casing covers the second die and at least a portion of the second interposer substrate. The terminals of the first interconnects at the top portion of the first casing are electrically coupled to corresponding interposer pads of the second interposer substrate.
The terms "assembly" and "subassembly" are used throughout to include a variety of articles of manufacture, including, e.g., semiconductor wafers having active components, individual integrated circuit dies, packaged dies, and devices comprising two or more microfeature workpieces or components, e.g., a stacked die package. Many specific details of certain embodiments of the invention are set forth in the following description and in Figures 3A-20 to provide a thorough understanding of these embodiments. A person skilled in the art, however, will understand that the invention may be practiced without several of these details, or that additional details can be added to the invention. Well-known structures and functions have not been shown or described in detail to avoid unnecessarily obscuring the description of the embodiments of the invention.
B. Embodiments of Methods for Manufacturing Stacked Microelectronic Devices and Microelectronic Devices Formed Using Such Methods
Figures 3A-7 illustrate stages in a method for manufacturing a plurality of stacked microelectronic devices in accordance with one embodiment of the invention. More specifically, Figure 3A is a partially schematic, top plan view of a subassembly 100 at an initial stage of the method, and Figure 3B is a side cross-sectional view taken substantially along line 3B-3B of Figure 3A. Referring to Figures 3A and 3B together, the subassembly 100 includes a support member 102, such as an interposer substrate, a printed circuit board, or another suitable structure, and a lead frame 120 on the support member. In the illustrated embodiment, the support member 102 includes (a) a first side 104 having a plurality of contacts 108, (b) a second side 106 opposite the first side 104 and having a plurality of pads 110, and (c) a plurality of traces 112 or other type of conductive lines between the contacts 108 and corresponding pads 110 or other contacts (not shown) at the second side 106 of the support member 102. The contacts 108 can be arranged in arrays for electrical connection to corresponding contacts on the lead frame 120 and/or one or more microelectronic dies attached to the support member 102, as described in more detail below. In one aspect of this embodiment, the pads 110 at the second side 106 of the support member 102 are arranged in an array corresponding to a standard JEDEC pinout. In other embodiments, the support member 102 may include a different number or arrangement of contacts/pads at the first side 104 and/or second side 106.
The lead frame 120 is a self-supporting structure that generally includes a peripheral dam 122 and a plurality of lead fingers 124 projecting inwardly of the peripheral dam 122. The lead fingers 124 are spaced from one another by gaps 126 therebetween. The inner surfaces of the peripheral dam 122 and each of the lead fingers 124 together form an inner periphery 128 of an opening 129 in the lead frame 120. In this example, the opening 129 extends through the entire thickness of the lead frame 120. The lead frame 120 can be formed of a metal or another suitable conductive material.
In some embodiments, the lead frame 120 can be a conductive material that is plated with a noble metal, such as gold, silver, or palladium, or the lead frame 120 can be a non-conductive material coated with a conductive material. A portion of each lead finger 124 contacts a corresponding contact 108 on the support member 102.
Although six lead fingers 124 are shown in the illustrated embodiment, the lead frame 120 can have a different number of lead fingers 124 based, at least in part, on the configuration of the microelectronic die that is to be electrically coupled to the lead frame 120. In still other embodiments, the lead fingers 124 can include more complex shapes instead of the fairly simple, block-shaped lead fingers 124 shown in the illustrated embodiment.
In one aspect of this embodiment, the peripheral dam 122 and each of the lead fingers 124 have generally the same height D1. As described in more detail below, the height D1 should be greater than the height of a microelectronic die to be positioned on the support member 102. In other embodiments, however, the height of the lead fingers 124 may be different than the height of the peripheral dam 122.
Referring next to Figures 4A and 4B, a microelectronic die 140 may be positioned within the opening 129 of the lead frame 120. Although only a single lead frame 120 and die 140 are shown attached to the support member 102, a plurality of lead frames 120 and dies 140 can be attached to the support member 102 for manufacturing a plurality of microelectronic devices. The die 140 can include a front or active side 142, a back side 144 opposite the active side 142, and integrated circuitry 146 (shown schematically). The back side 144 of the die 140 is attached to the exposed first side 104 of the support member 102 within the opening 129. The die 140 can also include a plurality of terminals 148 (e.g., bond-pads) arranged in an array at the active side 142 and electrically coupled to the integrated circuitry 146. The terminals 148 accordingly provide external contacts to provide source voltages, ground voltages, and signals to the integrated circuitry 146 in the die 140. In the illustrated embodiment, the terminals 148 are adjacent to the periphery of the die 140 and electrically coupled to corresponding contacts 108 on the support member 102 by wire-bonds 150 or other types of connectors. The wire-bonds 150 generally include a loop height that remains below the height D1 of the lead frame 120 to ensure complete encapsulation of the wire-bonds 150 by an encapsulant, as described in more detail below.
In other embodiments, the die 140 can have other features and/or the die can be attached and electrically coupled to the support member 102 using other arrangements, such as a flip-chip configuration (FCIP) or another suitable method. Furthermore, the order in which the lead frame 120 and die 140 are attached to the support member 102 can be varied. In the embodiment described above, the lead frame 120 is attached to the support member 102 before the die 140 is attached to the support member. In other embodiments, however, the die 140 can be attached to the support member 102 before the lead frame 120 is attached to the support member.
In still further embodiments, the lead frame 120 and the die 140 may be simultaneously attached to the support member 102.
Referring next to Figures 5A and 5B, an encapsulant 160 may be disposed in the opening 129 after the die 140 is electrically coupled to the contacts 108 to form a casing 162 that encapsulates at least a portion of the subassembly 100. More particularly, the exposed first side 104 of the support member 102, the inner periphery 128 of the lead frame 120, and the die 140 define a cavity 152 within the opening 129 that may be partially or completely filled with the encapsulant 160 to form the casing 162. In the illustrated embodiment, the cavity 152 is completely filled with the encapsulant 160 such that an upper portion 164 of the casing 162 is substantially coplanar with an upper portion 130 of the lead fingers 124. In other embodiments, however, the upper portion 164 of the casing 162 may be below the upper portion 130 of the lead fingers 124 as long as the die 140 and corresponding wire-bonds 150 are completely encapsulated.
The encapsulant 160 can be deposited into the opening 129 using a suitable application process, such as conventional injection molding, film molding, or other suitable process. In several embodiments, the encapsulant 160 is delivered to the cavity 152 and is allowed to simply fill the cavity and cover the die 140 and wire-bonds 150. If any encapsulant 160 flows outwardly over the upper portion 130 of the lead fingers 124, the overburden of encapsulant material can be removed by grinding, polishing, or other suitable techniques. In other embodiments, however, the flow of encapsulant 160 can be limited by use of a molding element (not shown) having a substantially flat molding surface that lies substantially flush against the upper portion 130 of the lead fingers 124 to keep the encapsulant 160 from flowing over the lead frame 120.
As best seen in Figure 5A, the peripheral dam 122 physically connects each of the lead fingers 124 to each other and helps define the cavity 152 for receiving the encapsulant 160 as described above. Once the casing 162 is in place, however, the peripheral dam 122 is no longer needed. Accordingly, the subassembly 100 can be cut along lines A-A to remove the peripheral dam 122 and form a packaged microelectronic device 170 (Figures 6A and 6B) having a plurality of isolated lead fingers 124 spaced about a periphery of the device 170. The subassembly 100 can be cut using a conventional wafer saw, high-pressure water jets, lasers, or the like. In other embodiments, the lines A-A can be moved slightly inward toward the die 140 such that a portion of each lead finger 124 is also removed along with the peripheral dam 122.
Referring next to Figures 6A and 6B, the device 170 can be tested post-packaging to ensure that the device functions properly so that only known good devices undergo further processing. Furthermore, a plurality of electrical couplers 166 (e.g., solder balls) can be attached to corresponding pads 110 at the second side 106 of the support member 102. The electrical couplers 166 are generally attached to the device 170 after testing to ensure that the couplers are only attached to known good devices, but in some embodiments the couplers can be attached to the device before testing.
One feature of the device 170 is that the upper portion 164 of the casing 162 is substantially coplanar with the upper portion 130 of the lead fingers 124.
The device 170 is accordingly a mechanically stable structure wherein each of the lead fingers 124 defines an electrical pathway between the pads 110 at the second side 106 of the support member 102 and the upper portion 130 of the corresponding lead finger 124. As explained below, this feature can facilitate stacking of two or more devices 170. Another feature of the device 170 is that at least a portion of each lead finger 124 is accessible along a periphery of the casing 162. More specifically, each lead finger 124 includes a front surface 132 facing toward the die 140 and a back surface 134 opposite the front surface 132 and generally aligned with the periphery of the casing 162. One advantage of this feature is that the accessible back surface 134 of each lead finger 124 can provide additional contact points to facilitate testing of the device 170.
Figure 7 is a side cross-sectional view of a stacked microelectronic device assembly 190 including an upper microelectronic device 170a stacked on top of a lower microelectronic device 170b. The upper and lower devices 170a and 170b can be generally similar to the microelectronic device 170 described above with respect to Figures 6A and 6B. The upper device 170a differs from the device 170 described above, however, in that the device 170a includes an array of pads 111a at the second side 106 of the support member 102 having a different arrangement than the array of pads 110 of the device 170. More specifically, the device 170a is configured to be an "upper" device in a stacked assembly and, accordingly, the pads 111a are arranged such that they contact corresponding lead fingers 124 of the lower device 170b to electrically couple the upper device 170a and lower device 170b together. Furthermore, the upper device 170a does not generally include electrical couplers attached to the pads 111a. In other embodiments, the upper device 170a and/or lower device 170b can have different arrangements. For example, the upper device 170a can include a plurality of pads 110a (shown in broken lines) having an arrangement generally similar to the arrangement of pads 110 of the device 170 described above. The lead fingers 124 of the lower device 170b can include engagement portions 124a (shown in broken lines) projecting from the front surface 132 of each lead finger 124 and configured to contact corresponding pads 110a. In still other embodiments, the upper and lower devices 170a and 170b can include other features.
The upper device 170a is coupled to the lower device 170b by attaching and electrically coupling the pads 111a of the upper device 170a to corresponding lead fingers 124 on the lower device 170b. In the illustrated embodiment, the second side 106 of the upper device's support member 102 is in direct contact with the upper portion 164 of the lower device's casing 162. Accordingly, the stacked assembly 190 does not include a fill material between the upper and lower devices 170a and 170b. As mentioned previously, however, in other embodiments the upper portion 164 of the casing 162 may not be coplanar with the upper portion 130 of the lead fingers 124 and, accordingly, a fill material (not shown) may be deposited into a gap or cavity between the upper device 170a and the lower device 170b. The fill material (e.g., an epoxy resin or other suitable molding compound) can enhance the integrity of the stacked assembly 190 and protect the components of the upper device and the lower device from moisture, chemicals, and other contaminants.
The fill material, however, is an optional component.
One advantage of the devices 170 formed using the methods described above with reference to Figures 3A-7 is that the devices can be stacked on top of each other. Stacking microelectronic devices increases the capacity and/or performance within a given surface area or footprint. For example, when the upper microelectronic device 170a is stacked on top of the lower microelectronic device 170b and the lower device 170b is attached to a circuit board or other external device, the upper microelectronic device 170a is electrically and operably coupled to the circuit board or external device without using any more surface area on the circuit board.
One feature of the stacked assembly 190 is that both the upper and lower devices 170a and 170b can be tested after packaging and before stacking to ensure that they function properly before being assembled together. Throughput of stacked assemblies 190 can accordingly be increased because defective devices can be detected and excluded from the stacked assemblies 190 formed using the methods described above, and each assembly will generally include only known good devices. This increases the yield of the packaging processes described above and reduces the number of devices that malfunction and/or include defects.
Still another feature of the devices 170 described above with reference to Figures 3A-7 is that the electrical couplers 166 are positioned inboard of the lead fingers 124. An advantage of this feature is that the footprint of the stacked assembly 190 is reduced as compared with conventional stacked devices, such as the devices 10a and 10b illustrated in Figure 1B where the solder balls 50 are outboard of the dies 40. Minimizing the footprint of microelectronic devices is particularly important in cell phones, PDAs, and other electronic products where there is a constant drive to reduce the size of microelectronic components used in such devices.
C. Additional Embodiments of Methods for Manufacturing Stacked Microelectronic Devices and Microelectronic Devices Formed Using Such Methods
Figures 8-20 illustrate various stages in other embodiments of methods for manufacturing stacked microelectronic devices. The following methods and devices formed using such methods can have many of the same advantages as the devices 170 and the stacked assembly 190 described above with respect to Figures 3A-7.
Figure 8, for example, is a schematic side cross-sectional view of a subassembly 200 including a plurality of microelectronic dies 220 (only three are shown) arranged in an array on a support member 202. The support member 202 can include an interposer substrate, a printed circuit board, or other suitable support member for carrying the dies 220. In the illustrated embodiment, the support member 202 includes (a) a first side 204 having a plurality of contacts 208, and (b) a second side 206 having a plurality of first pads 210 and a plurality of second pads 212. The contacts 208 can be arranged in arrays for electrical connection to corresponding terminals on the dies 220, and the first and second pads 210 and 212 can be arranged in arrays to receive a plurality of electrical couplers (e.g., solder balls) and/or other types of electrical interconnects. The support member 202 further includes a plurality of conductive traces 214 electrically coupling the contacts 208 to corresponding first and second pads 210 and 212.
In one aspect of this embodiment, the first and/or second pads 210 and 212 at the second side 206 of the support member 202 are arranged in an array corresponding to a standard JEDEC pinout. In other embodiments, the support member 202 may include a different number or arrangement of contacts/pads at the first side 204 and/or the second side 206.
The individual dies 220 include integrated circuitry 222 (shown schematically), a front or active side 224, a plurality of terminals 226 (e.g., bond-pads) arranged in an array at the active side 224 and electrically coupled to the integrated circuitry 222, and a back side 228 opposite the active side 224. The back sides 228 of the dies 220 are attached to the support member 202 with an adhesive 230, such as an adhesive film, epoxy, tape, paste, or other suitable material. A plurality of wire-bonds 232 or other types of connectors couple the terminals 226 on the dies 220 to corresponding contacts 208 on the support member 202. Although the illustrated dies 220 have the same structure, in other embodiments, the dies 220 may have different features to perform different functions. In further embodiments, the dies 220 may be attached and electrically coupled to the support member 202 using other arrangements, such as an FCIP configuration or another suitable method.
Figure 9 is a schematic side cross-sectional view of the subassembly 200 after attaching a plurality of interconnects or filaments 234 to the contacts 208 at the first side 204 of the support member 202. The interconnects or filaments 234 can include thin, flexible wires attached and electrically coupled to corresponding contacts 208. In the illustrated embodiment, for example, the interconnects 234 include relatively straight, free-standing wire-bond lines that are attached to and project away from the contacts 208 in a direction generally normal to the first side 204 of the support member 202. The interconnects 234 include a height H relative to the first side 204 of the support member 202 that is based, at least in part, on the desired height of the resulting packaged device. The interconnects 234 can also include an electrical coupler 236 (e.g., a ball-shaped portion) at a distal end of each interconnect. As described in more detail below, the electrical couplers 236 can help improve joint interconnection with one or more devices that may be stacked on the dies 220. The interconnects 234 are generally attached to the contacts 208 after forming the wire-bonds 232, but in some embodiments the interconnects 234 and wire-bonds 232 can be formed at the same time. In other embodiments, such as the embodiment described below with respect to Figure 15, the interconnects 234 can have a different arrangement and/or include different features.
Referring to Figure 10, an encapsulant 240 is deposited onto the support member 202 to form a plurality of casings 242 encapsulating the dies 220, the wire-bonds 232, and at least a portion of the interconnects 234. The encapsulant 240 can be deposited onto the support member 202 using a suitable application process, such as conventional injection molding, film molding, or other suitable process.
Referring next to Figure 11, a top portion 244 (shown in broken lines) of the casings 242 can be removed to at least partially expose the electrical couplers 236 at the distal end of the interconnects 234. The top portion 244 of the casings 242 can be removed using a laser grinding process or another suitable process.
In other embodiments, a mold used during encapsulation of the subassembly 200 can include cavities or recesses corresponding to the arrangement of electrical couplers 236 such that the individual electrical couplers are not encapsulated when forming the casings 242 and, therefore, a grinding or removal process is not necessary. In still other embodiments, the encapsulant 240 can be deposited using another suitable process that leaves the electrical couplers 236 exposed after the device is removed from the mold. In still further embodiments, a laser drilling process can be used after encapsulation to isolate and expose at least a portion of the interconnects 234 and a conductive material (e.g., gold) can be deposited into the resulting vias to create a plurality of conductive pads in a desired arrangement at the top portion 244 of the casing 242. If desired, a redistribution structure can then be formed at the top portion 244 to redistribute the signals from the conductive pads to a larger array of contacts. After at least partially exposing the electrical couplers 236 of the interconnects 234, the subassembly 200 can be cut along lines B-B to singulate a plurality of individual microelectronic devices 250.
Referring next to Figure 12, the individual devices 250 can be tested post-packaging to ensure that each device functions properly so that only known good devices undergo further processing. Further, a plurality of electrical couplers 252 (e.g., solder balls) can be attached to corresponding pads 212 at the second side 206 of the support member 202. The electrical couplers 252 are generally attached to the devices 250 after testing to ensure that the couplers are only attached to known good devices, but in some embodiments the couplers can be attached to the devices before testing.
In several embodiments, one or more individual devices 250 can be stacked together to form stacked microelectronic device assemblies. Figure 13, for example, is a side cross-sectional view of a stacked microelectronic device assembly 290 including an upper microelectronic device 250a stacked on top of a lower microelectronic device 250b. The upper and lower devices 250a and 250b can be generally similar to the devices 250 described above with respect to Figures 8-12. The upper device 250a can be coupled to the lower device 250b by attaching the second side 206 of the upper device's support member 202 to the top portion 244 of the lower device's casing 242 with an adhesive material 260, such as an adhesive film, epoxy, tape, paste, or other suitable material. The lower device's electrical couplers 236b can be electrically coupled to corresponding first pads 210a on the upper device 250a. In the illustrated embodiment, for example, each of the electrical couplers 236b is electrically coupled to corresponding first pads 210a with electrical connectors 262. The electrical connectors 262 may also physically bond (at least in part) the upper device 250a to the lower device 250b. The electrical connectors 262 can include solder connections that are reflowed as is known in the art or other suitable connectors.
In several embodiments, a fill material 264 can be deposited into the area between the upper device 250a and the lower device 250b and, if no additional devices are to be stacked on the upper device 250a, over the exposed electrical couplers 236a at the top portion 244 of the upper device 250a.
The fill material 264 can enhance the integrity of the stacked assembly 290 and protect the components of the upper and lower devices 250a and 250b from moisture, chemicals, and other contaminants. In one embodiment, the fill material 264 can include a molding compound such as an epoxy resin. In other embodiments, the fill material 264 can include other suitable materials. Depositing the fill material 264 is an optional step that may not be included in some embodiments.
In other embodiments, additional microelectronic devices can be stacked onto the upper microelectronic device 250a by exposing the electrical couplers 236a at the top portion 244 of the upper device 250a, stacking one or more additional devices (not shown) onto the upper device 250a, and electrically coupling the additional devices to the electrical couplers 236a. In still further embodiments, the upper and lower devices 250a and 250b can be different devices. For example, the microelectronic dies 220 in the upper and lower devices 250a and 250b can be the same or different types of dies and/or the upper and lower devices 250a and 250b can include other features.
Figure 14 illustrates a microelectronic device 350 configured in accordance with another embodiment of the invention. The microelectronic device 350 is generally similar to the microelectronic devices 250 described above with reference to Figures 8-12. Accordingly, like reference numbers are used to refer to like components in Figures 8-12 and Figure 14. The device 350 differs from the device 250, however, in that the device 350 includes an interconnect 334 having a different configuration than the interconnect 234 of the device 250. More specifically, the interconnect 334 of the device 350 includes a wire loop such that a ball portion and a stitch portion of the wire are both at the corresponding contacts 208 on the support member 202. The loop-shaped interconnect can have the height H generally similar to the interconnect 234 described above such that a top portion 335 of the interconnect 334 can be exposed at the top portion 244 of the casing 242. One advantage of the loop-shaped interconnects 334 is that such interconnects are generally expected to be more durable than the single-filament interconnects 234 described previously because the loop-shaped interconnects are more securely anchored to the corresponding contacts 208 and, accordingly, are less likely to bend or disconnect from the contacts during molding. Furthermore, in several embodiments the loop-shaped interconnects 334 can provide lower inductance than the interconnects 234.
Figures 15A-18 illustrate stages in yet another embodiment of a method for manufacturing a plurality of stacked microelectronic devices. Figure 15A, for example, is a partially schematic, isometric view of a subassembly 400 at an initial stage of the method. The subassembly 400 includes a plurality of microelectronic dies 430 (shown in broken lines) arranged in an array on a support member 402 and encapsulated with a casing 462. It will be appreciated that although only four dies 430 are shown attached to the support member 402 in the illustrated embodiment, a different number of dies 430 can be attached to the support member 402 for manufacturing a plurality of microelectronic devices. The subassembly 400 further includes a plurality of small openings or vias 440 (i.e., "pin holes") extending through the casing 462 to a first side 404 of the support member 402.
The openings 440 are generally arranged in the "streets" or non-active areas between the individual dies 430. The openings 440 are discussed in greater detail below with reference to Figures 16A and 16B.
Figure 15B is a side cross-sectional view taken substantially along lines 15B-15B of Figure 15A. Referring to Figures 15A and 15B together, the support member 402 can include an interposer substrate, a printed circuit board, or other suitable support member. In the illustrated embodiment, the support member 402 includes (a) the first side 404 having a plurality of first contacts 408 and a plurality of second contacts 409, (b) a second side 406 opposite the first side 404 and having a plurality of first pads 410 and a plurality of second pads 411, and (c) a plurality of traces 412 or other type of conductive lines between the first and/or second contacts 408 and 409 and corresponding first and/or second pads 410 and 411 or other contacts (not shown) at the second side 406 of the support member 402. The first and second contacts 408 and 409 can be arranged in arrays for electrical connection to corresponding contacts on the dies 430 and one or more devices stacked on the packaged dies, as described in more detail below. In one aspect of this embodiment, the second pads 411 at the second side 406 of the support member 402 are arranged in an array corresponding to a standard JEDEC pinout. In other embodiments, the support member 402 may include a different number or arrangement of contacts and/or pads.
The individual microelectronic dies 430 can include a front or active side 432, a back side 434 opposite the active side 432, and integrated circuitry 436 (shown schematically). The back side 434 of the dies 430 can be attached to the first side 404 of the support member 402 with an adhesive (not shown). The dies 430 can also include a plurality of terminals 438 (e.g., bond-pads) arranged in an array at the active side 432 and electrically coupled to the integrated circuitry 436. In the illustrated embodiment, the terminals 438 are arranged adjacent a periphery of the dies 430 and used to electrically couple the dies 430 to the support member 402 using a chip-on-board (COB) configuration. More specifically, a plurality of wire-bonds 439 or other types of connectors extend between the terminals 438 and corresponding second contacts 409 on the support member 402. In other embodiments, the dies 430 can have other features and/or the dies can be attached and electrically coupled to the support member 402 using other arrangements, such as an FCIP configuration, a board-on-chip (BOC) configuration, or another suitable configuration.
Referring next to Figures 16A and 16B, a conductive material 442 is deposited into each of the openings 440 to form a plurality of electrically conductive interconnects 444 extending through the casing 462 to corresponding first contacts 408 on the support member 402. In one embodiment, for example, a solder ball (not shown) is placed at a top portion of each opening 440 and reflowed such that the solder generally fills the corresponding opening. In other embodiments, however, the conductive material 442 can be deposited into the openings 440 using other suitable methods.
After forming the conductive interconnects 444, the subassembly 400 can be cut along lines C-C to singulate a plurality of individual microelectronic devices 450.
Figure 17A, for example, is a partially schematic, isometric view of a singulated device 450, and Figure 17B is a side cross-sectional view taken substantially along lines 17B-17B of Figure 17A. Referring to Figures 17A and 17B together, the individual devices 450 can be tested at this stage of the method to ensure that each device functions properly so that only known good devices undergo further processing. The device 450 illustrated in Figures 17A and 17B is configured to be a "bottom" or "lower" device in a stacked microelectronic device and, accordingly, a plurality of electrical couplers (not shown) can be attached to corresponding second pads 411 at the second side of the support member 402. As discussed previously, the second pads 411 are arranged to have a standard JEDEC pinout. On the other hand, if the device 450 was configured to be an "upper" device (i.e., a device stacked on one or more lower devices), the second pads 411 could have a different arrangement and/or electrical couplers may not be attached to the second pads.
One feature of the device 450 is that the interconnects 444 are at least partially exposed at a top portion 454 and a periphery portion 452 of the device 450. The exposed interconnects 444 accordingly define an electrical pathway between the first and second pads 410 and 411 at the second side 406 of the support member 402 and the top portion 454 of the device 450. As explained below, this feature can facilitate stacking of two or more devices 450.
Figure 18, for example, is a partially schematic, isometric view of a stacked microelectronic device assembly 490 including a first microelectronic device 450a, a second upper microelectronic device 450b stacked on the first microelectronic device 450a, and a third microelectronic device 450c on the second microelectronic device 450b. The devices 450a-c can be generally similar to the devices 450 described above with respect to Figures 15A-17B. The second device 450b can be coupled to the first device 450a by attaching the first pads 410b at the second side 406 of the second device's support member 402 to corresponding exposed portions of the first device's interconnects 444 at the top portion 454 of the first device 450a. The third device 450c can be coupled to the second device 450b in a generally similar manner.
In one embodiment, a plurality of extremely small alignment holes (not shown) can be formed completely through each device 450a-c before stacking the devices together. Either during or after stacking the devices 450 together, a laser beam or other suitable beam of light can be directed through the alignment holes in the stacked assembly 490 to ensure that the individual devices are properly aligned relative to each other so that the external electrical contacts on each device are in contact with appropriate contacts on the adjoining device(s). For example, if the beam passes completely through the stacked assembly, the alignment holes in each device are properly aligned. On the other hand, if the light does not pass completely through the stacked assembly, one or more of the devices are out of alignment.
In other embodiments, other suitable methods can be used to align the individual devices 450 relative to each other in the stacked assembly 490.
Figures 19 and 20 illustrate stages of a method for manufacturing a plurality of stacked microelectronic devices in accordance with still yet another embodiment of the invention. This method can include several steps that are at least generally similar to those described above with respect to Figures 15A-17B. Figure 19, for example, is a side cross-sectional view of a microelectronic device 550 having a number of features generally similar to the devices 450 described above with reference to Figures 15A-17B. The arrangement of the die and the configuration of the interconnects in the device 550, however, differ from the arrangement of the die 430 and the interconnects 444 in the devices 450. More specifically, the device 550 includes a die 530 having an FCIP configuration rather than the COB configuration of the die 430 in the devices 450 described above. Moreover, the device 550 includes a plurality of interconnects 544 positioned inboard of a periphery portion 554 of the device 550, in contrast with the interconnects 444 that are at least partially exposed about the periphery portion 452 of the devices 450.
The die 530 of the device 550 can include an active side 532 attached to the first side 404 of the support member 402, a back side 534 opposite the active side 532, and integrated circuitry 536 (shown schematically). The die 530 can also include a plurality of terminals 538 electrically coupled to the integrated circuitry 536 and attached to corresponding first contacts 508 at the first side 404 of the support member 402. The first contacts 508 can have a different arrangement on the support member 402 than the arrangement of first contacts 408 described previously. In other embodiments, the die 530 can include different features and/or can be attached to the support member 402 using a different arrangement.
The interconnects 544 extend through the casing 462 to corresponding second contacts 509 on the support member 402. The interconnects 544 can be formed using methods generally similar to those used to form the interconnects 444 described above. One particular aspect of the interconnects 544 in the illustrated embodiment is that the interconnects are arranged in laterally adjacent pairs (shown as a first interconnect 544a and a second interconnect 544b) about the die 530. One advantage of this feature is that it increases the number of signals that can be passed from the device 550 to an external device without substantially increasing the footprint of the device 550. In other embodiments, the interconnects 544 can have different arrangements about the die (e.g., single interconnects arranged inboard of the periphery of the device 550 or more than two interconnects arranged together).
The device 550 also includes a plurality of first pads 510 and a plurality of second pads 511 at the second side 406 of the support member 402. The first pads 510 are arranged in an array corresponding to a standard JEDEC pinout and the second pads 511 are arranged in a pattern generally corresponding to the arrangement of the second contacts 509 at the first side 404 of the support member 402 to facilitate stacking of two or more devices 550.
In several embodiments, a plurality of electrical couplers 566 (e.g., solder balls) can be attached to corresponding first pads 510.
Figure 20, for example, is a side cross-sectional view of a stacked microelectronic device assembly 590 including an upper microelectronic device 550a stacked on top of a lower microelectronic device 550b. The upper and lower devices 550a and 550b can be generally similar to the microelectronic device 550 described above with respect to Figure 19. The upper device 550a differs from the device 550 described above, however, in that the device 550a is configured to be an "upper" device in a stacked assembly and, accordingly, the upper device 550a generally does not include electrical couplers attached to the first pads 510a.
The upper device 550a is coupled to the lower device 550b by attaching and electrically coupling the second pads 511 of the upper device 550a to corresponding interconnects 544 on the lower device 550b. In the illustrated embodiment, for example, the second side 406 of the upper device's support member 402 is in direct contact with the top portion of the lower device's casing. In other embodiments, however, a plurality of electrical couplers (not shown) may be used to couple the upper device's second pads 511 to corresponding interconnects 544 on the lower device 550b. In embodiments including electrical couplers, a filler material (not shown) may also be deposited into the resulting gap between the upper device 550a and the lower device 550b.
One feature of the stacked assemblies 190/290/490/590 described above with respect to Figures 7, 13, 18, and 20, respectively, is that the individual microelectronic devices 170/250/450/550 in each assembly include through-packaging interconnects that are at least partially exposed at a top portion of each device's casing to facilitate stacking of the individual devices without requiring intermediate structures or large solder balls between the stacked devices. An advantage of this feature is that it can reduce the vertical profiles of the stacked assemblies 190/290/490/590. Devices with smaller vertical profiles are extremely desirable in cell phones, PDAs, and other electronic devices where there is a constant drive to reduce the size of microelectronic components used in such devices.
From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the invention. For example, one or more additional microelectronic devices may be stacked on the devices in each of the embodiments described above to form stacked devices including a greater number of stacked units. Furthermore, one or more additional microelectronic dies may be stacked on the dies in each of the microelectronic devices described above to form individual microelectronic devices having more than one die. The microelectronic devices may also include a number of other different features and/or arrangements. Aspects of the invention described in the context of particular embodiments may be combined or eliminated in other embodiments. Further, although advantages associated with certain embodiments of the invention have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the invention.
Accordingly, the invention is not limited except as by the appended claims. |
A system and method for executing previously created run time executables in a configurable processing element array is disclosed. In one embodiment, this system and method begins by identifying at least one subset of program code. The method may then generate at least one set of configuration memory contexts that replaces each of the at least one subsets of program code, the at least one set of configuration memory contexts emulating the at least one subset of program code. The method may then manipulate the at least one set of multiple context processing elements using the at least one set of configuration memory contexts. The method may then execute the plurality of threads of program code using the at least one set of multiple context processing elements. |
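To make the abstract's flow concrete, the following is a minimal runnable sketch of the four described steps (identify subsets of program code, generate configuration memory contexts that emulate them, manipulate the multiple context processing elements, and execute the threads). It is illustrative only; every identifier in it (MultiContextPE, generate_contexts, execute_program, and so on) is hypothetical rather than taken from the patent.

class MultiContextPE:
    """One multiple context processing element: it stores several
    configuration memory contexts and can switch among them."""
    def __init__(self):
        self.contexts = {}   # context id -> configuration words
        self.active = None
    def load(self, ctx_id, words):
        self.contexts[ctx_id] = words
    def select(self, ctx_id):
        # "Manipulating" an element here amounts to activating a stored context.
        self.active = ctx_id

def generate_contexts(code_subset, words_per_context=4):
    # Stand-in for the step that replaces a subset of program code with
    # configuration memory contexts emulating that subset.
    return [hash((code_subset, i)) & 0xFFFFFFFF for i in range(words_per_context)]

def execute_program(code_subsets, pe_array, threads):
    for ctx_id, subset in enumerate(code_subsets):   # identify each code subset
        words = generate_contexts(subset)            # generate its context set
        for pe in pe_array:
            pe.load(ctx_id, words)                   # manipulate the processing elements
    for thread_id in threads:                        # execute the threads of program code
        for pe in pe_array:
            pe.select(thread_id % len(code_subsets))

pe_array = [MultiContextPE() for _ in range(8)]
execute_program(["fir_filter", "fft_stage"], pe_array, threads=range(4))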
CLAIMS What is claimed is: 1. A method of executing software, comprising: retrieving a first kernel code segment; identifying a first configuration information required to execute said first kernel code segment; building an entry in a kernel code execution table utilizing said first kernel code segment and said first configuration information; selecting a first accelerator set configured to execute said first kernel code segment; and initiating a direct memory access transfer to said first accelerator set. 2. The method of claim 1, wherein said first configuration information is stored in a register. 3. The method of claim 2, wherein the register defines a context. 4. The method of claim 1, further comprising identifying a first set of arguments. 5. The method of claim 4, wherein said initiating said direct memory access transfer includes transferring said first set of arguments. 6. The method of claim 1, further comprising identifying a first set of microcode. 7. The method of claim 6, wherein said initiating said direct memory access transfer includes transferring said first set of microcode. 8. The method of claim 1, wherein said initiating said direct memory access transfer includes transferring said first configuration information. 9. The method of claim 1, further comprising retrieving a second kernel code segment. 10. The method of claim 9, further comprising selecting a second accelerator set. 11. The method of claim 10, wherein said first accelerator set and said second accelerator set are overlapping. 12. The method of claim 10, wherein said first accelerator set and said second accelerator set are non-overlapping. 13. The method of claim 12, wherein said first kernel code segment executes on said first accelerator set and said second kernel code segment executes on said second accelerator set concurrently. 14. The method of claim 13, wherein a third kernel code segment executes on said first accelerator set subsequent to said first kernel code segment and concurrently with said second kernel code segment. 15. The method of claim 9, wherein initiating said direct memory access includes transferring a second configuration information to said first accelerator set while said first kernel code segment executes on said first accelerator set. 16. The method of claim 9, wherein initiating said direct memory access includes transferring a first set of microcode to said first accelerator set while said first kernel code segment executes on said first accelerator set. 17. The method of claim 9, wherein initiating said direct memory access includes transferring a first set of arguments to said first accelerator set while said first kernel code segment executes on said first accelerator set. 18. The method of claim 1, further comprising determining completion requirements of said first kernel code segment to determine the order of execution of said first kernel code segment. 19. The method of claim 18, wherein said determining completion requirements of said first kernel code segment includes determining a variant of said first kernel code segment. 20. The method of claim 18, wherein said determining completion requirements of said first kernel code segment includes determining whether to execute said first kernel code segment in said first accelerator set or in a second accelerator set. 21. 
An apparatus, comprising: a memory storing at least one set of configuration information, the at least one set of configuration information describing at least one set of contexts; at least one accelerator; and a kernel processor coupled to the memory, said kernel processor controlling the processing of at least one thread of program code on said at least one accelerator by manipulating the at least one set of configuration information. 22. The apparatus of claim 21, further comprising at least one main processor, configured to process overhead code. 23. The apparatus of claim 21, wherein said at least one accelerator is a multiple context processing element. 24. The apparatus of claim 23, wherein said multiple context processing elements are grouped into overlapping bins. 25. The apparatus of claim 23, wherein said multiple context processing elements are grouped into non-overlapping bins. 26. The apparatus of claim 25, wherein said kernel processor is configured to load a first kernel code segment into a first one of said non-overlapping bins and to load a second kernel code segment into a second one of said non-overlapping bins. 27. The apparatus of claim 21, wherein said at least one accelerator is a digital signal processor. 28. The apparatus of claim 27, wherein said digital signal processor has a single instruction cache. 29. The apparatus of claim 27, wherein said digital signal processor has dual instruction caches. 30. The apparatus of claim 27, wherein said digital signal processor has an instruction cache configured with dual-port memory, wherein a first port is coupled to a first bus and a second port is coupled to a second bus. 31. An apparatus configured to execute software, comprising: means for retrieving a first kernel code segment; means for identifying a first configuration information required to execute said first kernel code segment; means for building an entry in a kernel code execution table utilizing said first kernel code segment and said first configuration information; means for selecting a first accelerator set configured to execute said first kernel code segment; and means for initiating a direct memory access transfer to said first accelerator set. 32. A machine-readable medium having stored thereon instructions for processing elements, which when executed by said processing elements perform the following: retrieving a first kernel code segment; identifying a first configuration information required to execute said first kernel code segment; building an entry in a kernel code execution table utilizing said first kernel code segment and said first configuration information; selecting a first accelerator set configured to execute said first kernel code segment; and initiating a direct memory access transfer to said first accelerator set. |
SYSTEM AND METHOD FOR EXECUTING HYBRIDIZED CODE ON A DYNAMICALLY CONFIGURABLE HARDWARE ENVIRONMENT FIELD OF THE INVENTION The present invention relates to the field of software run time operating systems. In particular, the present invention relates to a system and method for executing software code in a dynamically configurable hardware environment. BACKGROUND OF THE INVENTION The software which executes upon processors is a sequence of digital words known as machine code. This machine code is understandable by the hardware of the processors. However, programmers typically write programs in a higher-level language which is much easier for humans to comprehend. The program listings in this higher-level language are called source code. In order to convert the human-readable source code into machine-readable machine code, several special software tools are known in the art. These software tools are compilers, linkers, assemblers, and loaders. Existing compilers, linkers, and assemblers prepare source code well in advance of its being executed upon processors. These software tools expect that the hardware upon which the resulting machine code executes, including processors, will be in a predetermined and fixed configuration for the duration of the software execution. If a flexible processing methodology were invented, then the existing software tools would be inadequate to support processors and other hardware lacking a predetermined and fixed configuration. Furthermore, once the software was prepared using replacements for these software tools, the existing run time operating systems would not be sufficient to execute the resulting software in a flexible processing environment. SUMMARY OF THE INVENTION A method and apparatus for processing a plurality of threads of program code is disclosed. In one embodiment, the method begins by retrieving a first kernel code segment. Then the method may identify a first set of configuration information required to execute the first kernel code segment. The method then may build an entry in a kernel code execution table utilizing the first kernel code segment and the first configuration information. The method may then select a first accelerator set configured to execute the first kernel code segment, and initiate a direct memory access transfer to the first accelerator set. BRIEF DESCRIPTION OF THE DRAWINGS The features, aspects, and advantages of the present invention will become more fully apparent from the following detailed description, appended claims, and accompanying drawings in which: Figure 1 is the overall chip architecture of one embodiment. This chip architecture comprises many highly integrated components. Figure 2 is an eight bit multiple context processing element (MCPE) core of one embodiment of the present invention. Figure 3 is a data flow diagram of the MCPE of one embodiment. Figure 4 shows the major components of the MCPE control logic structure of one embodiment. Figure 5 is the finite state machine (FSM) of the MCPE configuration controller of one embodiment. Figure 6 is a data flow system diagram of the preparation of run time systems tables by the temporal automatic place and route (TAPR) of one embodiment. Figure 7A is a block diagram of exemplary MCPEs, according to one embodiment. Figure 7B is a block diagram of exemplary digital signal processors (DSP), according to one embodiment. Figure 8 is a diagram of the contents of an exemplary run time kernel (RTK), according to one embodiment. 
Figure 9A is a process chart showing the mapping of an exemplary single threaded process into kernel segments, according to one embodiment. Figure 9B is a process chart showing the allocation of the kernel segments of Figure 9A into multiple bins. Figure 9C is a process chart showing the allocation of the kernel segments of two processes into multiple bins. Figure 10 is an exemplary TAPR table, according to one embodiment. Figure 11 is a diagram of a first exemplary variant of a design, according to one embodiment. Figure 12 is a diagram of a second exemplary variant of a design, according to another embodiment. Figure 13 is a diagram of an exemplary logical MCPE architecture, according to one embodiment. Figure 14 is a diagram of an exemplary logical processor-based architecture, according to one embodiment. Figure 15 is a flowchart of processor functions, according to one embodiment. Figure 16 is a flowchart of the hardware accelerator behavior, according to one embodiment. Figure 17 is a flowchart for a RTK processor, according to one embodiment. Figure 18 is a table to support the operation of the RTK processor, according to one embodiment. DETAILED DESCRIPTION OF THE INVENTION In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. However, one having an ordinary skill in the art may be able to practice the invention without these specific details. In some instances, well-known circuits, structures, and techniques have not been shown in detail so as not to unnecessarily obscure the present invention. Figure 1 is the overall chip architecture of one embodiment. This chip architecture comprises many highly integrated components. While prior art chip architectures fix resources at fabrication time, specifically instruction source and distribution, the chip architecture of the present invention is flexible. This architecture uses flexible instruction distribution that allows position independent configuration and control of a number of multiple context processing elements (MCPEs), resulting in superior performance provided by the MCPEs. The flexible architecture of the present invention uses local and global control to provide selective configuration and control of each MCPE in an array; the selective configuration and control occurs concurrently with present function execution in the MCPEs. The chip of one embodiment of the present invention is composed of, but not limited to, a 10x10 array of identical eight-bit functional units, or MCPEs 102, which are connected through a reconfigurable interconnect network. The MCPEs 102 serve as building blocks out of which a wide variety of computing structures may be created. The array size may vary between 2x2 MCPEs and 16x16 MCPEs, or even more depending upon the allowable die area and the desired performance. A perimeter network ring, or a ring of network wires and switches that surrounds the core array, provides the interconnections between the MCPEs and perimeter functional blocks. Surrounding the array are several specialized units that may perform functions that are too difficult or expensive to decompose into the array. These specialized units may be coupled to the array using selected MCPEs from the array. These specialized units can include large memory blocks called configurable memory blocks 104. In one embodiment these configurable memory blocks 104 comprise eight blocks, two per side, of 4 kilobyte memory blocks. 
Other specialized units include at least one configurable instruction decoder 106. Furthermore, the perimeter area holds the various interfaces that the chip of one embodiment uses to communicate with the outside world, including: input/output (I/O) ports; a peripheral component interface (PCI) controller, which may be a standard 32-bit PCI interface; one or more synchronous burst static random access memory (SRAM) controllers; a programming controller that is the boot-up and master control block for the configuration network; a master clock input and phase-locked loop (PLL) control/configuration; a Joint Test Action Group (JTAG) test access port connected to all the serial scan chains on the chip; and I/O pins that are the actual pins that connect to the outside world. Two concepts which will be used to a great extent in the following description are context and configuration. Generally, "context" refers to the definition of what hardware registers in the hardware perform which function at a given point in time. In different contexts, the hardware may perform differently. A bit or bits in the registers may define which definition is currently active. Similarly, "configuration" usually refers to the software bits that command the hardware to enter into a particular context. This set of software bits may reside in a register and define the hardware's behavior when a particular context is set. Figure 2 is an eight bit MCPE core of one embodiment of the present invention. Primarily the MCPE core comprises memory block 210 and basic ALU core 220. The main memory block 210 is a 256 word by eight bit wide memory, which is arranged to be used in either single or dual port modes. In dual port mode the memory size is reduced to 128 words in order to be able to perform two simultaneous read operations without increasing the read latency of the memory. Network port A 222, network port B 224, ALU function port 232, control logic 214 and 234, and memory function port 212 each have configuration memories (not shown) associated with them. The configuration memories of these elements are distributed and are coupled to a Configuration Network Interface (CNI) (not shown) in one embodiment. These connections may be serial connections but are not so limited. The CNI couples all configuration memories associated with network port A 222, network port B 224, ALU function port 232, control logic 214 and 234, and memory function port 212, thereby controlling these configuration memories. The distributed configuration memory stores configuration words that control the configuration of the interconnections. The configuration memory also stores configuration information for the control architecture. Optionally it can also be a multiple context memory that receives context selecting signals which have been broadcast globally and locally from a variety of sources. Figure 3 is a data flow diagram of the MCPE of one embodiment. The structure of each MCPE allows for a great deal of flexibility when using the MCPEs to create networked processing structures. The major components of the MCPE include static random access memory (SRAM) main memory 302, ALU with multiplier and accumulate unit 304, network ports 306, and control logic 308. The solid lines mark data flow paths while the dashed lines mark control paths; all of the lines are one or more bits wide in one embodiment. 
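To make the context/configuration distinction concrete, the following is a minimal software sketch. The field names, widths, and the four-context limit are illustrative assumptions introduced here, not the actual MCPE register layout:

    #include <stdint.h>

    #define MCPE_NUM_CONTEXTS 4u  /* assumed context count; the real hardware may differ */

    /* A configuration: software bits that command the hardware to enter a
     * particular context. Each per-context word defines what the hardware
     * registers do while that context is active. */
    typedef struct {
        uint8_t context_word[MCPE_NUM_CONTEXTS]; /* per-context behavior definition */
        uint8_t active_context;                  /* bit(s) selecting the active definition */
    } mcpe_config_t;

    /* Selecting a different context changes the hardware's behavior on the
     * next cycle without rewriting the per-context definitions themselves. */
    static inline void mcpe_select_context(mcpe_config_t *cfg, uint8_t ctx)
    {
        if (ctx < MCPE_NUM_CONTEXTS)
            cfg->active_context = ctx;
    }

In this model, rewriting a context_word corresponds to loading a new configuration, while changing active_context corresponds to a context switch.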
There is a great deal of flexibility available within the MCPE because most of the major components may serve several different functions depending on the MCPE configuration. The MCPE main memory 302 is a group of 256 eight bit SRAM cells that can operate in one of four modes. It takes in up to two eight bit addresses from A and B address/data ports, depending upon the mode of operation. It also takes in up to four bytes of data, which can be from four floating ports, the B address/data port, the ALU output, or the high byte from the multiplier. The main memory 302 outputs up to four bytes of data. Two of these bytes, memory A and B, are available to the MCPE's ALU and can be directly driven onto the level 2 network. The other two bytes, memory C and D, are only available to the network. The output of the memory function port 306 controls the cycle-by-cycle operation of the memory 302 and the internal MCPE data paths as well as the operation of some parts of the ALU 304 and the control logic 308. The MCPE main memory may also be implemented as a static register file in order to save power. Each MCPE contains a computational unit 304 comprised of three semi-independent functional blocks. The three semi-independent functional blocks comprise an eight bit wide ALU, an 8x8 to sixteen bit multiplier, and a sixteen bit accumulator. The ALU block, in one embodiment, performs logical, shift, arithmetic, and multiplication operations, but is not so limited. The ALU function port 306 specifies the cycle-by-cycle operation of the computational unit. The computational units in orthogonally adjacent MCPEs can be chained to form wider-word data paths. The MCPE network ports 306 connect the MCPE network to the internal MCPE logic (memory, ALU, and control). There are eight network ports 306 in each MCPE, each serving a different set of purposes. The eight network ports 306 comprise two address/data ports, two function ports, and four floating ports. The two address/data ports feed addresses and data into the MCPE memories and ALU. The two function ports feed instructions into the MCPE logic. The four floating ports may serve multiple functions. The determination of what function they are serving is made by the configuration of the receivers of their data. The MCPEs of one embodiment are the building blocks out of which more complex processing structures may be created. The structure that joins the MCPE cores into a complete array in one embodiment is actually a set of several mesh-like interconnect structures. Each interconnect structure forms a network, and each network is independent in that it uses different paths; but the networks do join at the MCPE input switches. The network structure of one embodiment of the present invention is comprised of a local area broadcast network (level 1), a switched interconnect network (level 2), a shared bus network (level 3), and a broadcast, or configuration, network. Figure 4 shows the major components of the MCPE control logic structure of one embodiment. The Control Tester 602 takes the output of the ALU for two bytes from floating ports 604 and 606, plus the left and right carryout bits, and performs a configurable test on them. The result is one bit indicating that the comparison matched. This bit is referred to as the control bit. This Control Tester 602 serves two main purposes. First, it acts as a programmable condition code generator testing the ALU output for any condition that the application needs to test for. 
Secondly, since these control bits can be grouped and sent out across the level 2 and 3 networks, this unit can be used to perform a second or later stage reduction on a set of control bits/data generated by other MCPEs. The level 1 network 608 carries the control bits. The level 1 network 608 consists of direct point-to-point communications between every MCPE and its 12 nearest neighbors. Thus, each MCPE will receive 13 control bits (12 neighbors and its own) from the level 1 network. These 13 control bits are fed into the Control Reduce block 610 and the BFU input ports 612. The Control Reduce block 610 allows the control information to rapidly affect neighboring MCPEs. The MCPE input ports allow the application to send the control data across the normal network wires so they can cover long distances. In addition the control bits can be fed into MCPEs so they can be manipulated as normal data. The Control Reduce block 610 performs a simple selection on either the control words coming from the level 1 control network, the level 3 network, or two of the floating ports. The selection control is part of the MCPE configuration. The Control Reduce block 610 selection results in the output of five bits. Two of the output bits are fed into the MCPE configuration controller 614. One output bit is made available to the level 1 network, and one output bit is made available to the level 3 network. The MCPE configuration controller 614 selects on a cycle-by-cycle basis which context, major or minor, will control the MCPE's activities. The controller consists of a finite state machine (FSM) that is an active controller and not just a lookup table. The FSM allows a combination of local and global control that changes over time. This means that an application may run for a period based on the local control of the FSM while receiving global control signals that reconfigure the MCPE, or a block of MCPEs, to perform different functions during the next clock cycle. The FSM provides for local configuration and control by locally maintaining a current configuration context for control of the MCPE. The FSM provides for global configuration and control by providing the ability to multiplex and change between different configuration contexts of the MCPE on each different clock cycle in response to signals broadcast over a network. This configuration and control of the MCPE is powerful because it allows an MCPE to maintain control during each clock cycle based on a locally maintained configuration context while providing for concurrent global on-the-fly reconfiguration of each MCPE. This architecture significantly changes the area impact and characterization of an MCPE array while increasing the efficiency of the array without wasting other MCPEs to perform the configuration and control functions. Figure 5 is the FSM 502 of the MCPE configuration controller of one embodiment. In controlling the functioning of the MCPE, control information 504 is received by the FSM 502 in the form of state information from at least one surrounding MCPE in the networked array. This control information is in the form of two bits received from the Control Reduce block of the MCPE control logic structure. In one embodiment, the FSM 502 also has three state bits that directly control the major and minor configuration contexts for the particular MCPE. The FSM 502 maintains the data of the current MCPE configuration by using a feedback path 506 to feed back the current configuration state of the MCPE of the most recent clock cycle. 
The feedback path 506 is not limited to a single path. The FSM 502 selects one of the available configuration memory contexts for use by the corresponding MCPE during the next clock cycle in response to the received state information from the surrounding MCPEs and the current configuration data. This selection is output from the FSM 502 in the form of a configuration control signal 508. The selection of a configuration memory context for use during the next clock cycle occurs, in one embodiment, during the execution of the configuration memory context selected for the current clock cycle. Figure 6 is a data flow system diagram of the preparation of run time systems tables by the temporal automatic place and route (TAPR) of one embodiment. In step 650 an application program in source code is selected. In the Figure 6 embodiment the application program is written in a procedural oriented language, C, but in other embodiments the application program could be written in another procedural oriented language, in an object oriented language, or in a dataflow language. The source code of step 650 is examined in decision step 652. Portions of the source code are separated into overhead code and kernel code sections. Kernel code sections are defined as those routines in the source code which may be advantageously executed in a hardware accelerator. Overhead code is defined as the remainder of the source code after all the kernel code sections are identified and removed. In one embodiment, the separation of step 652 is performed by a software profiler. The software profiler breaks the source code into functions. In one embodiment, the complete source code is compiled and then executed with a representative set of test data. The profiler monitors the timing of the execution, and then based upon this monitoring determines the function or functions whose execution consumes a significant portion of execution time. Profiler data from this test run may be sent to the decision step 652. The profiler identifies these functions as kernel code sections. In an alternate embodiment, the profiler examines the code of the functions and then identifies a small number of functions that are anticipated to consume a large portion of the execution runtime of the source code. These functions may be identified by attributes such as having a regular structure, having intensive mathematical operations, having a repeated or looped structure, and having a limited number of inputs and outputs. Attributes which argue against the function being identified as a kernel section include numerous branches and overly complex control code. In an alternate embodiment, the compiler examines the code of the functions to determine the size of arrays traversed and the number of variables that are live during the execution of a particular block or function. Code whose total memory use is less than that available in the hardware accelerators and associated memories is classified as kernel code sections. The compiler may use well-understood optimization methods such as constant propagation, loop induction, in-lining and intra-procedural value range analysis to infer this information from the source code. Those functions that are identified as kernel code sections by one of the above embodiments of the profiler are then labeled, in step 654, as kernel code sections. The remainder of the source code is labeled as overhead code. In alternate embodiments, the separation of step 652 may be performed manually by a programmer. 
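As a hypothetical illustration of the attributes the profiler looks for, the function below has a regular, repeated loop structure, intensive multiply-accumulate arithmetic, and a small number of inputs and outputs, making it a plausible kernel code candidate. The FIR filter itself is an assumed example, not one taken from the source text:

    /* Plausible kernel code section: regular structure, intensive math,
     * looped, with few inputs and outputs. */
    void fir_kernel(const short *in, const short *coef, short *out,
                    int n, int taps)
    {
        for (int i = 0; i + taps <= n; i++) {
            long acc = 0;
            for (int t = 0; t < taps; t++)
                acc += (long)in[i + t] * (long)coef[t]; /* multiply-accumulate */
            out[i] = (short)(acc >> 15); /* assumed Q15 fixed-point scaling */
        }
    }

Overhead code, by contrast, would be the surrounding control flow: argument setup, branching, file and device handling, and calls into sections such as this one.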
In step 656, the Figure 6 process creates hardware designs for implementing the kernel code sections of step 654. These designs are the executable code derived from the source code of the kernel code sections. Additionally, the designs contain any necessary microcode or other fixed constant values required in order to run the executable code on the target hardware. The designs are not compiled in the traditional sense. Instead they are created by the process of step 656, which allows for several embodiments. In one embodiment, the source code of the kernel code section is compiled automatically by one of several compilers corresponding to the available hardware accelerators. In an alternate embodiment, a programmer may manually realize the executable code from the source code of the kernel code sections, as shown by the dashed line from step 656 to step 650. In a third embodiment the source code of the kernel code sections is compiled automatically for execution on both the processors and the hardware accelerators, and both versions are loaded into the resulting binary. In a fourth embodiment, a hardware accelerator is synthesized into a custom hardware accelerator description. In step 658 the hardware designs of step 656 are mapped to all available target hardware. The target hardware may be a processor, an MCPE, or a defined set of MCPEs called a bin. A bin may contain any number of MCPEs from one to the maximum number of MCPEs on a given integrated circuit. However, in one embodiment a quantity of 12 MCPEs per bin is used. The MCPEs in each bin may be geometrically neighboring MCPEs, or the MCPEs may be distributed across the integrated circuit. However, in one embodiment the MCPEs of each bin are geometrically neighboring. In the temporal automatic place and route (TAPR) of step 660, the microcode created in step 656 may be segmented into differing context-dependent portions. For example, a given microcode design may be capable of loading and executing in either lower memory or upper memory of a given bin. The TAPR of step 660 may perform the segmentation in several different ways depending upon the microcode. If, for example, the microcode is flat, then the microcode may only be loaded into memory in one manner. Here no segmentation is possible. Without segmentation one microcode may not be background loaded onto a bin's memory. The bin must be stalled and the microcode loaded off-line. In another example, memory is a resource which may be controlled by the configuration. It is possible for the TAPR of step 660 to segment microcode into portions, corresponding to differing variants, which correspond to differing contexts. For example, call one segmented microcode portion context 2 and another one context 3. Due to the software separation of the memory of the bin it would be possible to place the context 2 and context 3 portions into lower memory and upper memory, respectively. This allows background loading of one portion while another portion is executing. The TAPR of step 660 supports two subsequent steps in the preparation of the source code for execution. In step 664, a table is prepared for subsequent use by the run time system. In one embodiment, the table of step 664 contains all of the three-tuples corresponding to allowable combinations of designs (from step 656), bins, and variants. A variant of a design or a bin is any differing implementation where the functional inputs and the outputs are identical when viewed from outside. 
The variants of step 664 may be variants of memory separation, such as the separation of memory into upper and lower memory as discussed above. Other variants may include differing geometric layouts of MCPEs within a bin, causing differing amounts of clock delays being introduced into the microcodes, and also whether or not the MCPEs within a bin are overlapping. In each case a variant performs a function whose inputs and outputs are identical outside of the function. The entries in the table of step 664 point to executable binaries, each of which may be taken and executed without further processing at run time. The table of step 664 is a set of all alternative execution methods available to the run time system for a given kernel section. The other step supported by the TAPR of step 660 is the creation of configurations, microcodes, and constants of step 662. These are the executable binaries which are pointed to by the entries in the table of step 664. Returning now to decision step 652, the portions of the source code which were previously deemed overhead are sent to a traditional compiler 670 for compilation of object code to be executed on a traditional processor. Alternately, the user may hand code the source program into the assembly language of the target processor. The overhead C code may also be nothing more than calls to kernel sections. The object code is used to create object code files at step 672. Finally, the object code files of step 672, the configurations, microcode, and constants of step 662, and the table of step 664 are placed together in a format usable by the run time system by the system linker of step 674. Note that the instructions for the process of Figure 6 may be described in software contained in a machine-readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and electrical, optical, acoustical, or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Figure 7A is a block diagram of exemplary MCPEs, according to one embodiment. Chip architecture 700 includes processing elements processor A 702, processor B 720, bin 0 706, bin 1 708, and bin 2 710. In the Figure 7A embodiment, the function of hardware accelerator may be assigned to the MCPEs, either individually or grouped into bins. A run-time kernel (RTK) 704 apportions the executable software among these processing elements at the time of execution. In the Figure 7A embodiment, processor A 702 or processor B 720 may execute the overhead code identified in step 652 and created as object files in step 672 of the Figure 6 process. Bin 0 706, bin 1 708, and bin 2 710 may execute the kernel code identified in step 652. Each processing element processor A 702 and processor B 720 is supplied with an instruction port, instruction port 724 and instruction port 722, respectively, for fetching instructions for execution of overhead code. Bin 0 706, bin 1 708, and bin 2 710 contain several MCPEs. In one embodiment, each bin contains 12 MCPEs. In alternate embodiments, the bins could contain other numbers of MCPEs, and each bin could contain a different number of MCPEs than the other bins. 
In the Figure 7A embodiment, bin 0 706, bin 1 708, and bin 2 710 do not share any MCPEs, and are therefore called non-overlapping bins. In other embodiments, bins may share MCPEs. Bins which share MCPEs are called overlapping bins. RTK 704 is a specialized microprocessor for controlling the configuration of chip architecture 700 and controlling the loading and execution of software in bin 0 706, bin 1 708, and bin 2 710. In one embodiment, RTK 704 may move data from data storage 728 and configuration microcode from configuration microcode storage 726 into bin 0 706, bin 1 708, and bin 2 710 in accordance with the table 730 stored in a portion of data storage 728. In alternate embodiments, RTK 704 may move data from data storage 728 without moving any configuration microcode from configuration microcode storage 726. Table 730 is comparable to the table created in step 664 discussed in connection with Figure 6 above. The RTK may also move data to and from I/O port NNN and I/O port MMM into the data memory 728. Figure 7B is a block diagram of exemplary digital signal processors (DSP), according to one embodiment. Chip architecture 750 includes processing elements processor A 752, processor B 770, DSP 0 756, DSP 1 758, and DSP 2 760. In the Figure 7B embodiment, the function of hardware accelerator may be assigned to the DSPs. In other embodiments, DSP 0 756, DSP 1 758, and DSP 2 760 may be replaced by other forms of processing cores. A run-time kernel (RTK) 754 apportions the executable software among these processing elements at the time of execution. In the Figure 7B embodiment, processor A 752 or processor B 770 may execute the overhead code identified in step 652 and created as object files in step 672 of the Figure 6 process. DSP 0 756, DSP 1 758, and DSP 2 760 may execute the kernel code identified in step 652. Each processing element processor A 752 and processor B 770 is likewise supplied with an instruction port for fetching instructions for execution of overhead code. One difference between the Figure 7A and Figure 7B embodiments is that the Figure 7B embodiment lacks an equivalent to the configuration microcode storage 726 of Figure 7A. No configuration microcode is required, as the DSPs of Figure 7B have a fixed instruction set (microcode) architecture. RTK 754 is a specialized microprocessor for controlling the configuration of chip architecture 750 and controlling the loading and execution of software in DSP 0 756, DSP 1 758, and DSP 2 760. In one embodiment, RTK 754 may move data from data storage 778 into DSP 0 756, DSP 1 758, and DSP 2 760 in accordance with the table 780 stored in a portion of data storage 778. Table 780 is comparable to the table created in step 664 discussed in connection with Figure 6 above. Figure 8 is a diagram of the contents of an exemplary run time kernel (RTK) 704, according to one embodiment. RTK 704 contains several functions in microcontroller form. In one embodiment, these functions include configuration direct memory access (DMA) 802, microcode DMA 804, arguments DMA 806, results DMA 808, and configuration network source 810. RTK 704 utilizes these functions to manage the loading and execution of kernel code and overhead code on chip architecture 700. 
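The DMA functions named above are described below as simple read-from-one-memory, write-to-another engines. The following is a minimal software model of that behavior, under assumed type and function names; the real engines are hardware and need no processor copy loop:

    #include <stddef.h>
    #include <stdint.h>

    /* One RTK-programmed transfer: a source region (e.g., configuration
     * microcode storage 726), a destination region (e.g., memory within the
     * MCPEs of a bin), and a length in bytes. */
    typedef struct {
        const uint8_t *src;
        uint8_t       *dst;
        size_t         len;
    } rtk_dma_desc_t;

    /* The engine copies the block without further processor intervention;
     * modeled here as a plain byte loop. */
    static void rtk_dma_run(const rtk_dma_desc_t *d)
    {
        for (size_t i = 0; i < d->len; i++)
            d->dst[i] = d->src[i];
    }

In this model, configuration DMA, microcode DMA, arguments DMA, and results DMA would differ only in which memories the src and dst pointers address.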
Configuration DMA 802, microcode DMA 804, arguments DMA 806, and results DMA 808 each comprise a simple hardware engine for reading from one memory and writing to another. Configuration DMA 802 writes configuration data created by the TAPR 660 in step 662 of the Figure 6 process. This configuration data configures a bin to implement the behavior of the kernel code section determined in the table-making step 664 of Figure 6. The configuration data transfers are under the control of RTK 704 and the configuration data itself is entered in table 730. Configuration data is unchanged over the execution of the hardware accelerator. Microcode DMA 804 writes microcode data for each configuration into the bins. This microcode further configures the MCPEs with instruction data that allows the function of the hardware accelerator to be changed on a cycle-by-cycle basis while the hardware accelerator is executing. Each bin may have multiple microcode data sets available for use. Microcode data is stored in the configuration microcode storage 726 and written into memory within the MCPEs of each bin by microcode DMA 804. Arguments DMA 806 and results DMA 808 set up transfers of data from data memory 728 into one of the bins bin 0 706, bin 1 708, or bin 2 710. Argument data are data stored in a memory by a general purpose processor which require subsequent processing in a hardware accelerator. The argument data may be considered the input data of the kernel code sections executed by the bins. Results data are data sent from the hardware accelerator to the general purpose processor as the end product of a particular kernel code section's execution in a bin. The functional units arguments DMA 806 and results DMA 808 transfer this data without additional processor intervention. Configuration network source 810 controls the configuration network. The configuration network effects the configuration of the MCPEs of the bins bin 0 706, bin 1 708 and bin 2 710, and of the level 1, level 2, and level 3 interconnect described in Figure 3 and Figure 4. Configuration of the networks enables the RTK to control the transfer of configuration data, microcode data, arguments data, and results data amongst the data memory 728, configuration memory 726, and the MCPEs of bin 0 706, bin 1 708 and bin 2 710. In cases where there are multiple contexts, RTK 704 may perform background loading of microcode and other data while the bins are executing kernel code. An example of this is discussed below in connection with Figure 11. Figure 9A is a process chart showing the mapping of an exemplary single threaded process into kernel segments, according to one embodiment. Source code 1 900 and source code 2 960 are two exemplary single threaded processes which may be used as the C source code 650 of the Figure 6 process. In one embodiment, source code 1 900 may contain overhead code 910, 914, 918, 922, 926, and 930, as well as kernel code 912, 916, 920, 924, and 928. The identification of the overhead code and kernel code sections may be performed in step 652 of the Figure 6 process. Overhead code 910, 914, 918, 922, 926, and 930 may be executed in processor A 702 or processor B 720 of the Figure 7A embodiment. Kernel code 912, 916, 920, 924, and 928 may be executed in bin 0 706, bin 1 708, or bin 2 710 of the Figure 7A embodiment. The TAPR 660 of the Figure 6 process may create the necessary configurations and microcode for the execution of the kernel code 912, 916, 920, 924, and 928. 
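In the spirit of the earlier remark that overhead C code may be nothing more than calls to kernel sections, a hedged sketch of such overhead code follows. The rtk_run_kernel entry point and the kernel section IDs are assumptions introduced here for illustration, not an interface defined in the source text:

    /* Hypothetical run-time entry point: package arguments and ask the RTK
     * to dispatch the named kernel code section to an available bin. */
    extern int rtk_run_kernel(int kernel_id, const void *args, void *results);

    enum { KERNEL_912, KERNEL_916, KERNEL_920 }; /* kernel section IDs (assumed) */

    int process_block(const void *input, void *output)
    {
        /* Overhead code: argument assembly and sequencing only; the
         * numerical work happens in the hardware accelerators. */
        if (rtk_run_kernel(KERNEL_912, input, output) != 0)
            return -1;
        return rtk_run_kernel(KERNEL_916, output, output);
    }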
Figure 9B is a process chart showing the allocation of the kernel segments of Figure 9A into multiple bins. Utilizing the table 730 produced in step 664 of the Figure 6 process, RTK 704 may load and execute the overhead code 910, 914, 918, 922, 926, and 930 and the kernel code 912, 916, 920, 924, and 928 into an available processor or bin as needed. In the exemplary Figure 9B embodiment, RTK 704 loads the first overhead code 910 into processor A 702 for execution during time period 970. RTK 704 then loads the first kernel code 912 into bin 0 706 for execution during time period 972. Depending upon whether overhead code 914 requires the completion of kernel code 912, RTK 704 may load overhead code 914 into processor A 702 for execution during time period 974. Similarly, depending upon whether kernel code 916 requires the completion of overhead code 914 or kernel code 912, RTK 704 may load kernel code 916 into bin 1 708 for execution during time period 976. Depending upon requirements for completion, RTK 704 may continue to load and execute the overhead code and kernel code in an overlapping manner in the processors and the bins. When overhead code or kernel code requires the completion of a previous overhead code or kernel code, RTK 704 may load the subsequent overhead code or kernel code but delay execution until the required completion. Figure 9C is a process chart showing the allocation of the kernel segments of two processes into multiple bins. In the Figure 9C embodiment, source code 1 900 and source code 2 960 may be the two exemplary single threaded processes of Figure 9A. Prior to the execution of source code 1 900 and source code 2 960 in Figure 9C, the kernel code and overhead code sections may be identified and processed in the Figure 6 process or in an equivalent alternate embodiment process. Utilizing the table 730 for source code 1 900, produced in step 664 of the Figure 6 process, RTK 704 may load and execute the overhead code 910, 914, 918, and 922, and the kernel code 912, 916, and 920 into an available processor or bin as needed. Similarly, an equivalent table (not shown) may be prepared for source code 2 960. In the Figure 9C embodiment, by utilizing this equivalent table for source code 2 960, RTK 704 may load and execute the overhead code 950, 954, and 958, and the kernel code 952 and 956, into an available processor or bin as needed. In the exemplary Figure 9C embodiment, RTK 704 loads the first overhead code sections 910 and 950 into processor A 702 and processor B 720, respectively, for execution in time periods 980 and 962, respectively. When overhead code 910 finishes executing, RTK 704 may load kernel code 912 into bin 0 706 for execution in time period 982. When kernel code 912 finishes executing, RTK 704 may load the next overhead code 914 into an available processor such as processor B 720 during time period 984. When overhead code 950 finishes executing, RTK 704 may load kernel code 952 into available bin 1 708 for execution during time period 964. When kernel code 952 finishes executing, RTK 704 may load the next overhead code 954 into processor A 702 for execution during time period 966. Therefore, as shown in Figure 9C, multiple threads may be executed utilizing the designs, bins, and tables of various embodiments of the present invention. The overhead code and kernel code sections of the several threads may be loaded and executed in an overlapping manner among the several processors and bins available. 
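The completion-dependent loading and execution just described might be modeled as follows. This is a minimal scheduling sketch; the segment structure, dependency field, and dispatch call are assumptions for illustration rather than the actual RTK implementation:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct segment {
        int             id;         /* e.g., 910, 912, ... */
        bool            is_kernel;  /* kernel code runs in a bin, overhead in a processor */
        struct segment *depends_on; /* section whose completion is required, or NULL */
        bool            loaded;
        bool            done;
    } segment_t;

    extern void start_on_target(segment_t *seg); /* hypothetical dispatch to a processor or bin */

    /* Loading may overlap other execution; execution is delayed until the
     * required completion, matching the behavior described above. */
    void rtk_schedule(segment_t *seg)
    {
        if (!seg->loaded)
            seg->loaded = true; /* background load into a free processor or bin */
        if (seg->depends_on == NULL || seg->depends_on->done)
            start_on_target(seg);
    }

Calling rtk_schedule on each pending segment per scheduling pass yields the overlapped loading and dependency-ordered execution shown in Figures 9B and 9C.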
Figure 10 is an exemplary TAPR table, according to one embodiment. The TAPR table of Figure 10 is a three-dimensional table, containing entries that are three-tuples of the possible combinations of bins, designs, and variants. The TAPR table contains more than just a recitation of the designs of the kernel code segments mapped into the bins (hardware accelerators). Instead, the TAPR table includes the dimension of variants of the bins. Each combination of designs and bins may have multiple variants. Variants perform the identical function from the viewpoint of the inputs and outputs, but differ in implementation. An example is when bins are configured from a 3 by 4 array of MCPEs versus a 4 by 3 array of MCPEs. In this case differing timing requirements due to differing path lengths may require separate variants in the configuration and microcode data of the hardware accelerator. In one embodiment, these variants may take the form of different microcode implementations of the design, or the variants may be differing signal routing paths among the MCPEs of the bins. Two additional exemplary variants are discussed below in connection with Figure 11 and Figure 12. Figure 11 is a diagram of a first exemplary variant of a design, according to one embodiment. Memory available to a bin is a resource that may be controlled by the configuration. In this embodiment, bin 0 706 may have a memory that is logically partitioned into a lower memory 1104 and an upper memory 1102. Each memory area, for example upper memory 1102 and lower memory 1104, may be running a different context. For example, there could be a context 2 running in upper memory 1102 and an alternate context 3 loaded in lower memory 1104. Bin 0 706 is configured in accordance with a design, but depending upon how the design is loaded in memory certain instructions such as jump and load may have absolute addresses embedded in them. Therefore the design may have a variant for loading in upper memory 1102 under the control of context 2 and a second variant for loading in lower memory 1104 under the control of context 3. Having multiple variants in this manner advantageously allows any run-time engine such as RTK 704 to load the microcode for one variant in either upper memory 1102 or lower memory 1104 while execution is still proceeding in the alternate memory space under a different context. Figure 12 is a diagram of a second exemplary variant of a design, according to another embodiment. The memory available to bin 1 708 may be in two physically distinct areas on the chip. In Figure 12 one section of memory may be at physical location 1202 with data path 1212, and another section of memory may be at physical location 1204 with data path 1214. If data path 1214 is physically longer than data path 1212, then it may be necessary to insert additional clock cycles for a given design to run on bin 1 708 from memory at physical location 1204 in comparison with physical location 1202. Here the two variants differ in the number of internal wait states in the microcode of the design. Figure 13 is a diagram of an exemplary logical MCPE architecture 1300, according to one embodiment. Included within architecture 1300 are main processor 1304, run time kernel (RTK) processor 1316, an instruction memory (IMEM) 1302, a processor data memory 1306 with attached DMA 1308, and a configuration memory 1310 with attached DMA 1312. RTK processor 1316 is connected to a control bus 1314, which controls the operation of DMA 1308 and DMA 1312. 
DMA 1308 in turn generates an argument bus 1318, and DMA 1312 in turn generates a configuration bus 1328. Architecture 1300 also includes several hardware accelerators 1320, 1330, and 1340. Each accelerator contains a local DMA for sending and receiving data to and from the argument bus 1318 and a DMA for receiving data from the configuration bus 1328. For example, accelerator 1320 has DMA 1322 for sending and receiving data to and from the argument bus 1318 and DMA 1324 for receiving data from the configuration bus 1328. In the Figure 13 embodiment, argument bus 1318 is a bi-directional bus that may carry instruction data, argument data, and results data. Figure 14 is a diagram of an exemplary logical processor-based architecture, according to one embodiment. Included within architecture 1400 are main processor 1404, run time kernel (RTK) processor 1416, an instruction memory (IMEM) 1402 with attached DMA 1412, and a processor data memory 1406 with attached DMA 1408. RTK processor 1416 generates a control bus 1414, which controls the operation of DMA 1408 and DMA 1412. DMA 1408 in turn generates an argument bus 1418, and DMA 1412 in turn generates an instruction bus 1428. Architecture 1400 also includes several DSPs 1420, 1430, and 1440. Each DSP is connected to a DMA controller for receiving argument data from the argument bus 1418 and a data cache for temporary storage of the argument data. Each DSP is also connected to a DMA controller for receiving instruction data from the instruction bus 1428 and an instruction cache for temporary storage of the instruction data. Both sets of DMA controllers receive control from the control bus 1414. For example, DSP 1420 has DMA controller 1428 for receiving data from the argument bus 1418 and data cache 1426 for temporary storage of the argument data. DSP 1420 also has DMA controller 1422 for receiving data from the instruction bus 1428 and instruction cache 1424 for temporary storage of the instruction data. In the Figure 14 embodiment, argument bus 1418 carries argument data but does not carry instruction data. Figure 15 is a flowchart of processor functions, according to one embodiment. The flowchart may describe operations of a main processor, such as the main processor 1304 of Figure 13. In step 1502, the main processor executes a subthread, which may be a section of overhead code such as overhead code 910 of Figure 9C. After the subthread has finished executing, in step 1504 the processor assembles the arguments necessary for a hardware accelerator, such as hardware accelerator 1320 of Figure 13. Then in step 1506 the processor sends a packet containing the arguments and other related data to a run time kernel processor, such as RTK processor 1316 of Figure 13. The RTK may send the packet containing arguments over the argument bus to a hardware accelerator. In step 1508 the main processor selects a subsequent subthread for execution. This subthread may be another section of overhead code. However, the main processor does not immediately begin execution of this subthread. In decision step 1510, the main processor determines whether or not the results are ready from the hardware accelerator. If yes, then step 1502 is entered and the next subthread is executed. If no, however, the main processor then loads another thread and different subthread in step 1508. In this manner the main processor continuously may select and execute only those subthreads whose arguments are ready. Figure 16 is a flowchart of the hardware accelerator behavior, according to one embodiment. 
The flowchart may describe the operations of a hardware accelerator, such as hardware accelerator 1320 of Figure 13 or DSP 1420 of Figure 14. In step 1602, the hardware accelerator configures itself for operation by executing code and selecting configuration control information sent via a configuration bus, such as the configuration bus 1328 of Figure 13. Step 1602 finishes by loading a new and subsequent set of code and configuration control information, should this be required during execution. Then in step 1604 the hardware accelerator waits for the arguments data to be sent from a main processor memory under control of a run time kernel processor. In step 1606 the arguments are loaded from a main processor memory into the hardware accelerator via DMA. In one embodiment, the arguments are loaded from a processor data memory 1306 into a local DMA 1322 of hardware accelerator 1320 via an argument bus 1318 of Figure 13. The argument bus 1318 may be under the control of a run time kernel processor, such as the RTK processor 1316. The hardware accelerator then executes its code, including kernel code segments. Then, in step 1608, the resulting arguments are sent back to the main processor via DMA. In one embodiment, the arguments are loaded back into a processor data memory 1306 from a local DMA 1322 of hardware accelerator 1320 via an argument bus 1318 of Figure 13. Again the argument bus 1318 may be under the control of a run time kernel processor, such as the RTK processor 1316. Finally, the hardware accelerator waits for a "go" signal to input new configuration data and code from a configuration bus, such as the configuration bus 1328 of Figure 13. After receiving a "go" signal, the process begins again at step 1602. Figure 17 is a flowchart for a RTK processor, according to one embodiment. The flowchart may describe the operations of a run time kernel processor, such as RTK processor 1316 of Figure 13. In decision step 1702, the run time kernel processor examines the request queue and determines whether the request queue is empty. This request queue may contain kernel code segments of the Figure 16 process. If the request queue is not empty, then there are kernel code segments which may be executed. In step 1704, the run time kernel processor loads a request from the queue written by a main processor, such as main processor 1304 of Figure 13. Then in step 1706 the run time kernel processor retrieves the configuration information needed to support execution of the requested kernel code segment. In step 1708 this information is used to build a new entry in a pending kernel code execution table. In step 1710 a hardware accelerator, which may be a bin of Figure 7A, is selected for executing the kernel code segment. The identification of the selected hardware accelerator is added to the pending kernel code execution table. Then in step 1712 the execution is started by initiating the DMA transfer to the hardware accelerator. The process then returns to the decision step 1702. If, however, the request queue is determined in step 1702 to be empty, then the process enters decision step 1720. In step 1720 the run time kernel processor determines whether a DMA is pending. If a DMA is pending, then the process enters decision step 1722. In decision step 1722, the run time kernel processor polls the DMA devices to determine whether the DMA is done. If not, then the process loops back to decision step 1720. 
If, however, in step 1722 the DMA devices are done, then, in step 1724, the value of state in the pending kernel code execution table is incremented. In alternate embodiments, the polling may be replaced by an interrupt driven approach. Then in step 1726 a subsequent DMA may be started, and the process returns to decision step 1720. If, however, in step 1720 it is determined that no DMA is pending, then the process exits through a determination of other pending input/output activity in the flexible processing environment. In decision step 1730 it is determined whether any such pending input/output activity is present. If so, then in step 1732 the input/output activity is serviced. If, however, no input/output activity is present, then the process returns to the determination of the request queue status in determination step 1702. Figure 18 is a table 1800 to support the operation of the RTK processor, according to one embodiment. In the Figure 18 embodiment, the table 1800 may serve as the pending kernel code execution table used in the Figure 17 process. The table 1800 includes entries for hardware identification 1802, state 1804, hardware accelerator (bin) 1806, DMA pending status 1808, and unit done status 1810. An exemplary entry in table 1800 is entry 1820. Entry 1820 indicates that the hardware accelerator whose hardware identification is 3 is currently in state 4 and being invoked on hardware accelerator (bin) 3 with DMA activity still pending. The state entry of table 1800 indicates a set of DMAs waiting to be performed in order to handle the configuration and argument loading onto the hardware accelerator and the subsequent return back to data memory for processing by the main processor. In one embodiment, states numbered 1 through n may indicate that there should be a load of configuration and static memory. States numbered n through m may indicate that there should be a load of arguments from the main processor's memory; these states then exist until the unit completes execution of the kernel code segment. Finally, states numbered m through p may indicate a result return back to data memory for processing by the main processor. In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes can be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. Therefore, the scope of the invention should be limited only by the appended claims. |
A method and apparatus for planarizing a microelectronic substrate. In one embodiment, the microelectronic substrate is engaged with a planarizing medium that includes a planarizing pad and a planarizing liquid, at least one of which includes a chemical agent that removes a corrosion-inhibiting agent from discrete elements (such as abrasive particles) of the planarizing medium and/or impedes the corrosion-inhibiting agent from coupling to the discrete elements. The chemical agent can act directly on the corrosion-inhibiting agent or can first react with a constituent of the planarizing liquid to form an altered chemical agent, which then interacts with the corrosion-inhibiting agent. Alternatively, the altered chemical agent can control other aspects of the manner by which material is removed from the microelectronic substrate, for example, the material removal rate. |
CLAIMS 1. A method for planarizing a microelectronic substrate, comprising: engaging the microelectronic substrate with a planarizing medium having a planarizing liquid and a planarizing pad with a planarizing surface, at least one of the planarizing liquid and the planarizing pad having a selected chemical agent; separating a passivating agent from a discrete element of the planarizing medium with the selected chemical agent and/or impeding the passivating agent from coupling to the discrete element of the planarizing medium with the selected chemical agent; and moving at least one of the planarizing pad and the microelectronic substrate relative to the other to remove material from the microelectronic substrate. 2. The method of claim 1, further comprising selecting the passivating agent to include a corrosion-inhibiting agent. 3. The method of claim 1 wherein separating and/or impeding the passivating agent includes chemically reacting the passivating agent with the selected chemical agent and dissolving the passivating agent in the planarizing liquid. 4. The method of claim 1 wherein separating and/or impeding the passivating agent includes chemically reacting the passivating agent with the selected chemical agent, breaking the passivating agent into constituents, and dissolving the constituents in the planarizing liquid. 5. The method of claim 1 wherein the planarizing pad includes abrasive elements fixedly dispersed therein and separating and/or impeding the passivating agent includes separating the passivating agent from the abrasive elements and/or restricting the passivating agent from attaching to the abrasive elements. 6. The method of claim 1, further comprising reacting the selected chemical agent with at least one constituent of the planarizing liquid to form an altered chemical agent and reacting the altered chemical agent with the passivating agent. 7. The method of claim 1, further comprising selecting the selected chemical agent to include phosphoric acid. 8. The method of claim 1 wherein the passivating agent includes benzoltriazole and separating and/or impeding the passivating agent includes chemically reacting the benzoltriazole with the selected chemical agent and dissolving the benzoltriazole in the planarizing liquid. 9. The method of claim 1, further comprising selecting the selected chemical agent to include an etchant. 10. The method of claim 1 wherein the planarizing pad includes an abrasive element having a first zeta potential and the microelectronic substrate includes a constituent having a second zeta potential, further comprising selecting the planarizing liquid to have a pH such that both the inhibiting agent and the abrasive element have a charge with a similar polarity at that pH. 11. The method of claim 1, further comprising selecting the planarizing liquid to have a pH of from about 6 to about 10. 12. The method of claim 1, further comprising selecting the planarizing liquid to have a pH of about 7. 13. 
A method for planarizing a microelectronic substrate, comprising: engaging the microelectronic substrate with a planarizing surface of a planarizing pad and moving at least one of the planarizing pad and the microelectronic substrate relative to the other to remove material from the microelectronic substrate; removing material from the planarizing pad as the one of the microelectronic substrate and the planarizing pad moves relative to the other to release a chemical agent from the planarizing pad; and separating a corrosion-inhibiting agent from a discrete element of the planarizing pad with the chemical agent and/or impeding the corrosion-inhibiting agent from coupling to the discrete element of the planarizing pad with the chemical agent. 14. The method of claim 13 wherein removing material from the planarizing pad includes abrading the material from the planarizing surface of the planarizing pad. 15. The method of claim 13, further comprising reacting the chemical agent with a constituent of the planarizing liquid to form an altered chemical agent, further wherein separating and/or impeding the corrosion-inhibiting agent includes reacting the corrosion-inhibiting agent with the altered chemical agent. 16. The method of claim 13 wherein separating and/or impeding the corrosion-inhibiting agent includes chemically reacting the corrosion-inhibiting agent with the chemical agent and dissolving the corrosion-inhibiting agent in the planarizing liquid. 17. The method of claim 13 wherein separating and/or impeding the corrosion-inhibiting agent includes chemically reacting the corrosion-inhibiting agent with the chemical agent, breaking the corrosion-inhibiting agent into constituents, and dissolving the constituents in the planarizing liquid. 18. The method of claim 13, further comprising selecting the chemical agent to include phosphoric acid. 19. The method of claim 13 wherein the corrosion-inhibiting agent includes benzoltriazole and separating and/or impeding the corrosion-inhibiting agent includes chemically reacting the benzoltriazole with the chemical agent and dissolving the benzoltriazole in the planarizing liquid. 20. The method of claim 13, further comprising selecting the chemical agent to include an etchant. 21. The method of claim 13 wherein the planarizing pad includes abrasive elements fixedly dispersed therein and separating and/or impeding the corrosion-inhibiting agent includes separating the corrosion-inhibiting agent from the abrasive elements and/or restricting the corrosion-inhibiting agent from attaching to the abrasive elements. 22. A method for planarizing a microelectronic substrate, comprising: engaging the microelectronic substrate with a planarizing surface of a planarizing pad and moving at least one of the planarizing pad and the microelectronic substrate relative to the other to remove material from the microelectronic substrate; releasing a first chemical agent embedded in the planarizing pad proximate to the planarizing surface by removing material from the planarizing pad as the one of the microelectronic substrate and the planarizing pad moves relative to the other; chemically transforming the first chemical agent into a second chemical agent in a chemical reaction external to the planarizing pad; and restricting an amount of corrosion-inhibiting agent chemically interacting with the planarizing pad by exposing the planarizing pad to the second chemical agent. 23. 
The method of claim 22 wherein restricting the amount of corrosion-inhibiting agent includes removing the corrosion-inhibiting agent from the planarizing pad with the second chemical agent. 24. The method of claim 22 wherein restricting an amount of corrosion-inhibiting agent includes preventing the corrosion-inhibiting agent from coupling to the planarizing pad. 25. The method of claim 22, further comprising selecting the first chemical agent to include phosphorus, chlorine, sulfur and/or nitrogen. 26. The method of claim 22, further comprising selecting the first chemical agent to produce a second chemical agent that includes phosphoric acid, hydrochloric acid, sulfuric acid and/or nitric acid. 27. The method of claim 22 wherein chemically transforming the first chemical agent includes combining the first chemical agent with a constituent of the planarizing liquid. 28. The method of claim 22 wherein the planarizing liquid includes water and chemically transforming the first chemical agent includes combining the first chemical agent with the water to form the second chemical agent. 29. The method of claim 22 wherein removing material from the planarizing pad includes abrading the material from the planarizing surface of the planarizing pad. 30. The method of claim 22 wherein restricting the corrosion-inhibiting agent includes chemically reacting the corrosion-inhibiting agent with the second chemical agent and dissolving the corrosion-inhibiting agent in the planarizing liquid. 31. The method of claim 22 wherein the planarizing pad includes abrasive elements fixedly dispersed therein and restricting the corrosion-inhibiting agent includes separating the corrosion-inhibiting agent from the abrasive elements and/or restricting the corrosion-inhibiting agent from coupling to the abrasive elements. 32. A method for planarizing a microelectronic substrate, comprising: engaging the microelectronic substrate with a planarizing liquid and a planarizing surface of a planarizing pad having a first chemical agent embedded therein; moving at least one of the microelectronic substrate and the planarizing pad relative to the other to remove material from the microelectronic substrate; releasing the first chemical agent into the planarizing liquid by removing material from the planarizing pad and exposing the first chemical agent to the planarizing liquid; chemically reacting the first chemical agent with the planarizing liquid to form a second chemical agent chemically different than the first chemical agent; and controlling a rate and/or a manner of material removal from the microelectronic substrate by chemically affecting the planarizing pad with the second chemical agent. 33. The method of claim 32 wherein controlling a rate and/or a manner of material removal includes restricting an amount of a corrosion-inhibiting agent chemically interacting with the planarizing pad by chemically combining the corrosion-inhibiting agent with the second chemical agent. 34. The method of claim 32 wherein the second chemical agent includes an etchant and controlling a rate and/or a manner of material removal includes accelerating a removal rate of material from the microelectronic substrate by exposing the microelectronic substrate to the etchant. 35. The method of claim 32, further comprising selecting the first chemical agent to include phosphorus, chlorine, sulfur and/or nitrogen. 36. 
The method of claim 32, further comprising selecting the first chemical agent to produce a second chemical agent that includes phosphoric acid, hydrochloric acid, sulfuric acid and/or nitric acid. 37. The method of claim 32 wherein chemically reacting the first chemical agent includes combining the first chemical agent with a constituent of the planarizing liquid. 38. The method of claim 32 wherein the planarizing liquid includes water and chemically reacting the first chemical agent includes combining the first chemical agent with the water to form the second chemical agent. 39. The method of claim 32 wherein removing material from the planarizing pad includes abrading the material from the planarizing surface of the planarizing pad. 40. A planarizing medium for planarizing a microelectronic substrate, comprising: a planarizing pad having a planarizing surface configured to engage the microelectronic substrate; and a planarizing liquid adjacent to the planarizing pad, at least one of the planarizing pad and the planarizing liquid having discrete elements and a chemical agent selected to separate a passivating agent from the discrete elements and/or impede the passivating agent from attaching to the discrete elements during planarization of the microelectronic substrate. 41. The planarizing medium of claim 40 wherein the chemical agent is selected to include a corrosion-inhibiting agent. 42. The planarizing medium of claim 40 wherein the chemical agent is selected to dissolve the passivating agent in the planarizing liquid. 43. The planarizing medium of claim 40 wherein the chemical agent is selected to break the passivating agent into constituents that are soluble in the planarizing liquid. 44. The planarizing medium of claim 40 wherein the chemical agent includes phosphoric acid. 45. The planarizing medium of claim 40 wherein the chemical agent includes an etchant. 46. The planarizing medium of claim 40, wherein the discrete elements include abrasive elements fixedly dispersed in the planarizing pad, further wherein the chemical agent is selected to separate the passivating agent from the abrasive elements and/or restrict the passivating agent from coupling to the abrasive elements. 47. The planarizing medium of claim 40 wherein the chemical agent is selected to react with a constituent of the planarizing liquid to form an altered chemical agent that separates the passivating agent from the abrasive elements and/or restricts the passivating agent from attaching to the abrasive elements. 48. The planarizing medium of claim 40, further comprising the planarizing liquid, the planarizing liquid having a pH of from about 6 to about 10. 49. The planarizing medium of claim 40, further comprising the planarizing liquid, the planarizing liquid having a pH of about 7. 50. The planarizing medium of claim 40 wherein the planarizing pad includes an abrasive element having a first zeta potential and the microelectronic substrate includes a constituent having a second zeta potential, the planarizing liquid having a pH such that both the inhibiting agent and the abrasive element have a charge with a similar polarity. 51. 
A planarizing medium for planarizing a microelectronic substrate, comprising: a planarizing pad body having a planarizing surface configured to engage the microelectronic substrate, the planarizing pad body including a planarizing pad material that erodes during planarization of the microelectronic substrate; and a first chemical agent embedded in the planarizing pad body proximate to the planarizing surface, the first chemical agent being selected to undergo a chemical reaction with a planarizing liquid to form a second chemical agent different than the first chemical agent when erosion of the planarizing pad body exposes the first chemical agent to the planarizing liquid, the second chemical agent being selected to control a manner of material removal from the microelectronic substrate by affecting chemical properties of the planarizing pad body. 52. The planarizing medium of claim 51 wherein the second chemical agent is selected to restrict an amount of a corrosion-inhibiting agent chemically interacting with the planarizing pad by chemically combining the corrosion-inhibiting agent with the second chemical agent. 53. The planarizing medium of claim 51 wherein the second chemical agent is selected to include an etchant for accelerating a removal rate of material from the microelectronic substrate. 54. The planarizing medium of claim 51 wherein the first chemical agent is selected to include phosphorus, chlorine, sulfur and/or nitrogen. 55. The planarizing medium of claim 51 wherein the first chemical agent is selected to produce a second chemical agent that includes phosphoric acid, hydrochloric acid, sulfuric acid and/or nitric acid. 56. A planarizing medium for planarizing a microelectronic substrate, comprising: a planarizing pad having a planarizing surface configured to engage the microelectronic substrate, the planarizing pad including a planarizing pad material that erodes during planarization of the microelectronic substrate; and a chemical agent embedded in the planarizing pad proximate to the planarizing surface, the chemical agent being released when erosion of the planarizing pad exposes the chemical agent, the chemical agent being selected to separate a corrosion-inhibiting agent from discrete elements of the planarizing pad and/or impede the corrosion-inhibiting agent from coupling to the discrete elements of the planarizing pad during planarization of the microelectronic substrate. 57. The planarizing medium of claim 56 wherein the chemical agent includes an etchant. 58. The planarizing medium of claim 56 wherein the chemical agent includes phosphoric acid. 59. The planarizing medium of claim 56 wherein the chemical agent is a first chemical agent and is selected to undergo a chemical reaction upon being released from the planarizing pad to form a second chemical agent configured to remove a corrosion-inhibiting agent from the planarizing pad and/or prevent the corrosion-inhibiting agent from coupling to the planarizing pad during planarization of the microelectronic substrate. 60. 
A planarizing medium for planarizing a microelectronic substrate, comprising: a planarizing pad body having a planarizing surface configured to engage the microelectronic substrate, the planarizing pad body including a planarizing pad material that erodes during planarization of the microelectronic substrate; and a first chemical agent embedded in the planarizing pad body proximate to the planarizing surface, the first chemical agent being selected to undergo a chemical reaction with a planarizing liquid to form a second chemical agent when erosion of the planarizing pad body exposes the first chemical agent to the planarizing liquid, the second chemical agent being selected to at least restrict an inhibiting agent from chemically interacting with the planarizing pad body during planarization of the microelectronic substrate. 61. The planarizing medium of claim 60 wherein the inhibiting agent includes benzoltriazole and the second chemical agent is selected to restrict the benzoltriazole from coupling to the planarizing surface of the planarizing pad body. 62. The planarizing medium of claim 60, further comprising abrasive elements fixedly dispersed in the planarizing pad body, the second chemical agent being selected to at least restrict an inhibiting agent from chemically interacting with the abrasive elements during planarization of the microelectronic substrate. 63. The planarizing medium of claim 62 wherein the abrasive elements include alumina particles. 64. The planarizing medium of claim 60 wherein the microelectronic substrate includes copper, the inhibiting agent is selected to inhibit corrosion of the copper, and the second chemical agent is selected to remove the inhibiting agent from the planarizing pad. 65. The planarizing medium of claim 60 wherein the first chemical agent is selected from chlorine, phosphorus, sulfur and nitrogen. 66. The planarizing medium of claim 60 wherein the first chemical agent is selected to form a second chemical agent that includes hydrochloric acid, phosphoric acid, sulfuric acid and/or nitric acid. |
METHOD AND APPARATUS FOR CONTROLLING CHEMICAL INTERACTIONS DURING PLANARIZATION OF MICROELECTRONIC SUBSTRATES

TECHNICAL FIELD

This invention relates to methods and apparatuses for controlling chemical interactions during planarization of microelectronic substrates, for example, controlling the interactions of a corrosion-inhibiting agent.

BACKGROUND

Mechanical and chemical-mechanical planarization processes (collectively "CMP") are used in the manufacturing of electronic devices for forming a flat surface on semiconductor wafers, field emission displays and many other microelectronic-device substrate assemblies. CMP processes generally remove material from a substrate assembly to create a highly planar surface at a precise elevation in the layers of material on the substrate assembly.

Figure 1 schematically illustrates an existing web-format planarizing machine 10 for planarizing a substrate 12. The planarizing machine 10 has a support table 14 with a top-panel 16 at a workstation where an operative portion "A" of a planarizing pad 40 is positioned. The top-panel 16 is generally a rigid plate to provide a flat, solid surface to which a particular section of the planarizing pad 40 may be secured during planarization. The planarizing machine 10 also has a plurality of rollers to guide, position and hold the planarizing pad 40 over the top-panel 16. The rollers include a supply roller 20, first and second idler rollers 21a and 21b, first and second guide rollers 22a and 22b, and a take-up roller 23. The supply roller 20 carries an unused or pre-operative portion of the planarizing pad 40, and the take-up roller 23 carries a used or post-operative portion of the planarizing pad 40. Additionally, the first idler roller 21a and the first guide roller 22a stretch the planarizing pad 40 over the top-panel 16 to hold the planarizing pad 40 stationary during operation. A motor (not shown) drives at least one of the supply roller 20 and the take-up roller 23 to sequentially advance the planarizing pad 40 across the top-panel 16. Accordingly, clean pre-operative sections of the planarizing pad 40 may be quickly substituted for used sections to provide a consistent surface for planarizing and/or cleaning the substrate 12.

The web-format planarizing machine 10 also has a carrier assembly 30 that controls and protects the substrate 12 during planarization. The carrier assembly 30 generally has a substrate holder 32 to pick up, hold and release the substrate 12 at appropriate stages of the planarizing process. Several nozzles 33 attached to the substrate holder 32 dispense a planarizing solution 44 onto a planarizing surface 42 of the planarizing pad 40. The carrier assembly 30 also generally has a support gantry 34 carrying a drive assembly 35 that can translate along the gantry 34. The drive assembly 35 generally has an actuator 36, a drive shaft 37 coupled to the actuator 36, and an arm 38 projecting from the drive shaft 37. The arm 38 carries the substrate holder 32 via a terminal shaft 39 such that the drive assembly 35 orbits the substrate holder 32 about an axis B-B (as indicated by arrow "R1"). The terminal shaft 39 may also rotate the substrate holder 32 about its central axis C-C (as indicated by arrow "R2"). The planarizing pad 40 and the planarizing solution 44 define a planarizing medium that mechanically and/or chemically-mechanically removes material from the surface of the substrate 12. 
The planarizing pad 40 used in the web-format planarizing machine 10 is typically a fixed-abrasive planarizing pad in which abrasive particles are fixedly bonded to a suspension material. In fixed-abrasive applications, the planarizing solution is a "clean solution" without abrasive particles because the abrasive particles are fixedly distributed across the planarizing surface 42 of the planarizing pad 40. In other applications, the planarizing pad 40 may be a non-abrasive pad without abrasive particles. The planarizing solutions 44 used with the non-abrasive planarizing pads are typically CMP slurries with abrasive particles and chemicals to remove material from a substrate.

To planarize the substrate 12 with the planarizing machine 10, the carrier assembly 30 presses the substrate 12 against the planarizing surface 42 of the planarizing pad 40 in the presence of the planarizing solution 44. The drive assembly 35 then orbits the substrate holder 32 about the axis B-B and optionally rotates the substrate holder 32 about the axis C-C to translate the substrate 12 across the planarizing surface 42. As a result, the abrasive particles and/or the chemicals in the planarizing medium remove material from the surface of the substrate 12.

The CMP processes should consistently and accurately produce a uniformly planar surface on the substrate assembly to enable precise fabrication of circuits and photo-patterns. During the fabrication of transistors, contacts, interconnects and other features, many substrate assemblies develop large "step heights" that create a highly topographic surface across the substrate assembly. Yet, as the density of integrated circuits increases, it is necessary to have a planar substrate surface at several intermediate stages during substrate assembly processing because non-uniform substrate surfaces significantly increase the difficulty of forming sub-micron features. For example, it is difficult to accurately focus photo patterns to within tolerances approaching 0.1 micron on non-uniform substrate surfaces because sub-micron photolithographic equipment generally has a very limited depth of field. Thus, CMP processes are often used to transform a topographical substrate surface into a highly uniform, planar substrate surface.

In some conventional CMP processes, the planarizing pad 40 engages a metal portion of the substrate 12 having a highly topographical surface with high regions and low regions. The planarizing liquid 44 can include solvents or other agents that chemically oxidize and/or etch the metal to increase the removal rate of the metal during planarization. During the planarizing process, the beneficial accelerating effect of the etchant can be reduced because the etchant can act at least as quickly on the low regions of the metal portion as the high regions of the metal portion. Accordingly, the low regions may recede from the high regions and reduce the planarity of the substrate 12. One approach addressing this potential drawback is to dispose a corrosion-inhibiting agent in the planarizing liquid 44 to restrict or halt the action of the etchant. This allows the mechanical interaction between the planarizing pad 40 and the substrate 12 to dominate the chemical interaction. Accordingly, the removal rate at the high regions of the microelectronic substrate 12 is generally higher than at the low regions because the high regions have more mechanical contact with the planarizing pad 40 than do the low regions. 
As a result, the height differences between the high regions and the low regions are more quickly reduced. The inhibiting agent, however, can have adverse effects on the overall removal rate and other aspects of the planarizing process.

SUMMARY OF THE INVENTION

The present invention is directed toward methods and apparatuses for planarizing microelectronic substrates. A method in accordance with one aspect of the invention includes engaging the microelectronic substrate with a planarizing medium having a planarizing liquid and a planarizing pad with a planarizing surface, with at least one of the planarizing liquid and the planarizing pad having a selected chemical agent. The method further includes separating a passivating agent (such as a corrosion-inhibiting agent) from a discrete element (such as an abrasive particle) of the planarizing medium with the selected chemical agent and/or impeding the corrosion-inhibiting agent from coupling to the discrete element of the planarizing medium with the selected chemical agent. The method still further includes moving at least one of the planarizing pad and the microelectronic substrate relative to the other to remove material from the microelectronic substrate.

In another aspect of the invention, the selected chemical agent can dissolve the corrosion-inhibiting agent or break the corrosion-inhibiting agent into constituents that dissolve in the planarizing liquid. The selected chemical agent can interact directly with the corrosion-inhibiting agent, or it can first react with at least one constituent of the planarizing liquid to form an altered chemical agent which then reacts with the corrosion-inhibiting agent. In still another aspect of the invention, the selected chemical agent can control a rate and/or manner of material removal from the microelectronic substrate after reacting with a constituent of the planarizing liquid to form a second chemical agent. For example, the second chemical agent can restrict an amount of a corrosion-inhibiting agent chemically interacting with the planarizing pad, or the second chemical agent can include an etchant to accelerate a removal rate of material from the microelectronic substrate.

The present invention is also directed toward a planarizing medium for planarizing a microelectronic substrate. In one aspect of the invention, the planarizing medium can include a planarizing pad having a planarizing surface configured to engage the microelectronic substrate, and a planarizing liquid adjacent to the planarizing pad. At least one of the planarizing pad and the planarizing liquid includes a chemical agent selected to separate a passivating agent (such as a corrosion-inhibiting agent) from discrete elements of the planarizing medium and/or inhibit the corrosion-inhibiting agent from attaching to the discrete elements during planarization of the microelectronic substrate. In one aspect of this invention, the chemical agent can be selected to react with a constituent of the planarizing liquid to form an altered chemical agent that restricts interaction between the corrosion-inhibiting agent and the planarizing pad. Alternatively, the altered chemical agent can be selected to control other aspects of material removal from the microelectronic substrate.

BRIEF DESCRIPTION OF DRAWINGS

Figure 1 is a partially schematic side elevational view of a planarizing apparatus in accordance with the prior art. 
Figure 2 is a schematic side elevational view partially illustrating a planarizing pad having embedded abrasive elements and an embedded reactive chemical agent in accordance with an embodiment of the invention. Figure 3 is a schematic side elevational view partially illustrating a planarizing pad supporting a planarizing liquid that includes a reactive chemical agent. Figure 4 is a partially schematic side elevational view of a polishing pad that supports a planarizing liquid on a CMP machine in accordance with another embodiment of the invention.

DETAILED DESCRIPTION

The present disclosure describes planarizing media and methods for using planarizing media for chemical and/or chemical-mechanical planarizing of substrates and substrate assemblies used in the fabrication of microelectronic devices. Many specific details of certain embodiments of the invention are set forth in the following description and in Figures 2-4 to provide a thorough understanding of these embodiments. One skilled in the art, however, will understand that the present invention may have additional embodiments, or that the invention may be practiced without several of the details described below.

Figure 2 is a schematic side elevational view illustrating a portion of a CMP machine 110 having a planarizing medium 150 in accordance with an embodiment of the invention. The planarizing medium 150 can include a planarizing pad 140 and a planarizing liquid 160 disposed on the planarizing pad 140. The planarizing machine 110 includes a support table 114 and a top-panel 116 that support the planarizing medium 150 in a manner generally similar to that discussed above with reference to Figure 1. The planarizing machine 110 further includes a substrate holder 132 that supports a microelectronic substrate 112, also in a manner generally similar to that discussed above with reference to Figure 1. As used herein, the term "microelectronic substrate" refers to a microelectronic substrate material with or without an assembly of microelectronic devices or features.

In one embodiment, the planarizing liquid 160 is dispensed onto the planarizing pad 140 from a port 133 in the substrate holder 132. Alternatively, the planarizing liquid 160 can be directed to the planarizing pad 140 from other sources, such as a conduit (not shown) positioned near the planarizing pad 140. In either embodiment, the planarizing liquid 160 can include one or more chemicals that control the removal rate and manner that material is removed from the microelectronic substrate 112 during planarization. For example, the planarizing liquid 160 can include an etchant for etching the microelectronic substrate 112 and/or a passivating agent, such as a corrosion-inhibiting agent to prevent or restrict corrosion or etching during selected phases of the planarization process. In one aspect of this embodiment, the microelectronic substrate 112 can include a copper layer or copper components, and the planarizing liquid 160 can include benzoltriazole to inhibit etching of the copper at selected phases of the CMP process. Alternatively, the planarizing liquid 160 and/or the planarizing pad 140 can include other chemicals that inhibit chemical interaction between the planarizing medium 150 and the microelectronic substrate 112.

The planarizing pad 140 can include a pad body 141 and a backing layer 142 that supports the pad body 141. The pad body 141 can include polycarbonates, resins, acrylics, polymers (such as polyurethane) or other suitable materials. 
In one embodiment, a plurality of abrasive elements 143 are distributed in the planarizing pad body 141 proximate to a planarizing surface 144 of the planarizing pad 140. As the planarizing pad 140 wears down during planarization, new abrasive elements 143 are exposed at the planarizing surface 144 to maintain or control the abrasive characteristics of the planarizing pad 140 throughout the planarization process. The abrasive elements 143 can include alumina, ceria, titania or other suitable abrasive materials that mechanically and/or chemically-mechanically remove material from the microelectronic substrate 112.

During planarization, the performance of the abrasive elements 143 can be impaired by the chemicals in the planarizing solution. For example, benzoltriazole or other inhibiting agents can attach to the surfaces of the abrasive elements 143 and reduce the chemical and/or mechanical interactions between the abrasive elements 143 and the microelectronic substrate 112. Accordingly, in one embodiment, the planarizing medium 150 includes a chemical agent 146 that reduces or eliminates the effect of inhibiting agents on the abrasive elements 143. In one aspect of this embodiment, the chemical agent 146 is embedded in the planarizing pad body 141 and is released into the planarizing liquid 160 as the planarizing pad 140 wears down. In another aspect of this embodiment, the chemical agent 146 is selected to undergo a chemical reaction with the planarizing liquid 160 or a constituent of the planarizing liquid 160 to form an altered chemical agent. The altered chemical agent then slows or halts the extent to which the inhibiting agent restricts the chemical and/or mechanical interaction between the abrasive elements 143 and the microelectronic substrate 112. For example, the chemical agent 146 can be selected to form a solvent or etchant that removes the inhibiting agent from the abrasive elements 143 and/or prevents the inhibiting agent from attaching, coupling and/or chemically interacting with the abrasive elements 143.

In one embodiment, the chemical agent 146 can include phosphorus, chlorine, nitrogen, sulfur or compounds that include these elements. Accordingly, the chemical agent can form an altered chemical agent that includes phosphoric acid, hydrochloric acid, nitric acid, or sulfuric acid, respectively, upon chemically reacting with the planarizing liquid 160. Alternatively, the chemical agent 146 can include other compounds or elements that react with the planarizing liquid 160 to form other chemicals that restrict or prevent interaction between the abrasive elements 143 and inhibiting agents.

In one aspect of the foregoing embodiments, the altered chemical agent can dissolve the inhibiting agent. Alternatively, the altered chemical agent can react with the inhibiting agent to form a compound that is more soluble in the planarizing liquid 160 than is the inhibiting agent alone. Accordingly, the inhibiting agent will be more likely to dissolve in the planarizing liquid 160. In another alternate embodiment, the altered chemical agent can break down the inhibiting agent into constituents that are more soluble in the planarizing liquid 160 than is the inhibiting agent alone. In still further embodiments, the altered chemical agent can undergo other reactions or interactions with the inhibiting agent that at least restrict the chemical interaction between the inhibiting agent and the abrasive elements 143. 
In another embodiment, the chemical agent 146 can react directly with the inhibiting agent without first reacting with the planarizing liquid 160. For example, the chemical agent 146 can include solvents, such as the acidic compounds discussed above, or other suitable compounds that dissolve the inhibiting agent or otherwise limit the ability of the inhibiting agent to impair the effectiveness of the abrasive elements 143.

Whether the chemical agent 146 reacts directly with the inhibiting agent or first reacts with the planarizing liquid 160 to form an altered chemical agent that reacts with the inhibiting agent, the chemical agent 146 can be embedded in the planarizing pad body 141. In one embodiment, solid granules of the chemical agent 146 are dispersed in a liquid or soft planarizing pad material, and then the planarizing pad material is cured to solidify around the chemical agent 146 and form discrete cells 145 around the chemical agent 146. For example, the chemical agent 146 can be distributed in the planarizing pad body 141 in a manner generally similar to that with which the abrasive elements 143 are distributed. Alternatively, the discrete cells 145 can be pre-formed in the planarizing pad body 141 and then filled with the chemical agent 146. In this alternate embodiment, the chemical agent 146 can be in a liquid, gel, or solid phase. In either of the above methods for distributing the chemical agent 146 in the planarizing pad body 141, the size, shape and distribution of the cells 145 within the planarizing pad body 141 can be selected to reduce the impact of the chemical agent 146 on the abrasive characteristics of the planarizing pad body. For example, the cells 145 can be small and uniformly distributed in the planarizing pad body 141 so as not to interfere with the distribution and/or operation of the abrasive elements 143. In one aspect of this embodiment, the cells 145 are randomly distributed and are from about 50% to about 100% the size of the abrasive elements 143. Alternatively, the cells 145 can be larger or smaller, so long as they do not interfere with the abrasive elements 143. The cells 145 can have a generally elliptical shape in one embodiment and can have other shapes in other embodiments.

In an embodiment in accordance with another aspect of the invention, the pH of the planarizing liquid 160 can be controlled to selected levels that are believed to reduce the chemical interaction between the inhibiting agent and the abrasive elements 143. For example, in one aspect of this embodiment, the abrasive elements 143 have a first zeta potential and the microelectronic substrate 112 includes a metal or other constituent having a second zeta potential. As used herein, the zeta potential refers to the potential of a surface in a particular planarizing medium. For example, when the planarizing liquid 160 includes an inhibiting agent, the agent typically includes negatively charged ions. Accordingly, the pH of the planarizing fluid 160 can be selected so that the abrasive elements 143 have a zeta potential similar (i.e., of the same polarity) to that of the inhibiting agent so that they repel. This can prevent chemical interaction between the inhibiting agent and the planarizing pad 140. In one aspect of this embodiment, for example, when the abrasive elements 143 include alumina and the microelectronic substrate 112 includes copper, the planarizing liquid 160 has a pH from about 6 to about 10. 
In a particular aspect of this embodiment, the planarizing liquid 160 has a pH of about 7; in other embodiments, the planarizing liquid has a pH of other values.

One feature of an embodiment of the planarizing medium 150 discussed above with reference to Figure 2 is that the planarizing pad 140 includes a chemical agent 146 that at least limits the chemical interaction between the inhibiting agent in the planarizing liquid 160 and the abrasive elements 143 in the planarizing pad 140. The chemical agent 146 may also limit, to a lesser degree, the interaction between the inhibiting agent and the microelectronic substrate 112, but the primary effect of the chemical agent 146 is generally to limit the chemical interaction between the inhibiting agent and the abrasive elements 143. An advantage of this feature is that the surfaces of the abrasive elements 143 can remain chemically active to planarize the microelectronic substrate 112. This is unlike some conventional techniques for which the inhibiting agent can restrict the effectiveness of the abrasive elements 143.

Another advantage of an embodiment of the planarizing medium 150 is that the chemical agent 146 remains embedded in the planarizing pad 140 until the planarizing pad 140 wears down sufficiently to release the chemical agent 146. Accordingly, the amount of the chemical agent 146 released into the planarizing liquid 160 can be controlled by controlling the concentration and the distribution of the chemical agent 146 in the planarizing pad 140 and the rate with which the planarizing pad 140 abrades during planarization.

In another embodiment, the chemical agent 146 (released as the planarizing pad 140 abrades during planarization) interacts with the planarizing liquid 160 to form compounds that control other aspects of the planarizing process. For example, the chemical agent 146 can react with the planarizing liquid 160 to form a solvent or etchant that removes material from the microelectronic substrate 112. In one aspect of this embodiment, the chemical agent 146 can include nitrogen or a nitrogen compound (such as potassium nitrate) that forms nitric acid when exposed to the planarizing liquid 160. The nitric acid can directly etch copper or other metals from the microelectronic substrate 112, to increase the planarizing rate of the microelectronic substrate when the metals are exposed. In other embodiments, the chemical agent 146 can react with the planarizing liquid 160 to form other chemical compounds. For example, the chemical agent 146 can form a surfactant that increases the wetted surface area of the planarizing pad 140 and/or the microelectronic substrate 112 to increase the speed and uniformity of the planarizing process. In still further embodiments, the chemical agent 146 can form other chemical elements or compounds that control the rate and/or the manner of material removal from the microelectronic substrate 112.

Figure 3 is a schematic side elevational view partially illustrating the planarizing machine 110 discussed above with reference to Figure 2 supporting a planarizing medium 250 that includes a planarizing pad 240 and a planarizing liquid 260 in accordance with another embodiment of the invention. The planarizing pad 240 can include a backing layer 242 that supports a planarizing pad body 241 having a plurality of abrasive elements 243. The planarizing pad 240 does not include an embedded chemical agent; instead, a chemical agent 246 is disposed directly in the planarizing liquid 260. 
In one aspect of this embodiment, the chemical agent 246 directly restricts chemical interactions between the inhibiting agent and the abrasive particles 243 without first undergoing a chemical reaction with the planarizing liquid 260. Alternatively, the chemical agent 246 can first react with the planarizing liquid 260 to form an altered chemical agent that restricts interactions between the inhibiting agent and the abrasive particles 243, in a manner generally similar to that discussed above with reference to Figure 2.

One feature of an embodiment of the planarizing medium 250 discussed above with reference to Figure 3 is that the chemical agent 246 can be disposed directly in the planarizing liquid 260. Accordingly, the amount of chemical agent 246 in contact with the planarizing pad 240 and the microelectronic substrate 112 can be controlled by controlling the amount of chemical agent 246 mixed in the planarizing liquid 260. An advantage of this feature is that the amount of the chemical agent interacting with the planarizing pad 240 can be controlled independently from the characteristics of the planarizing pad 240.

Figure 4 is a partially schematic cross-sectional view of a rotary planarizing machine 310 with a generally circular platen or table 320, a carrier assembly 330, a planarizing pad 340 positioned on the table 320, and a planarizing liquid 360 on the planarizing pad 340. The compositions of planarizing pad 340 and the planarizing liquid 360 can be generally similar to the compositions of planarizing pads and planarizing liquids discussed above with reference to Figures 2 and 3. Alternatively, the planarizing liquid 360 can be a slurry having a suspension of abrasive elements, and the planarizing pad 340 can have no abrasive elements. The planarizing machine 310 may also have an under-pad 325 attached to an upper surface 322 of the platen 320 for supporting the planarizing pad 340. A drive assembly 326 rotates (arrow "F") and/or reciprocates (arrow "G") the platen 320 to move the planarizing pad 340 during planarization. The carrier assembly 330 controls and protects the microelectronic substrate 112 during planarization. The carrier assembly 330 typically has a substrate holder 332 with a pad 334 that holds the microelectronic substrate 112 via suction. A drive assembly 336 of the carrier assembly 330 typically rotates and/or translates the substrate holder 332 (arrows "H" and "I," respectively). Alternatively, the substrate holder 332 may include a weighted, free-floating disk (not shown) that slides over the planarizing pad 340.

To planarize the microelectronic substrate 112 with the planarizing machine 310, the carrier assembly 330 presses the microelectronic substrate 112 against a planarizing surface 342 of the planarizing pad 340. The platen 320 and/or the substrate holder 332 then move relative to one another to translate the microelectronic substrate 112 across the planarizing surface 342. As a result, the abrasive particles in the planarizing pad 340 and/or the chemicals in the planarizing liquid 360 remove material from the surface of the microelectronic substrate 112.

From the foregoing, it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims. 
Disclosed herein is a generational thread scheduler. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and a shared resource to be allocated fairly among the threads of executable instructions contending for access to the shared resource. Generational thread scheduling logic may allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource, allocating a reservation for the shared resource to each other requesting thread of the executing threads, and then blocking the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation has been granted access to the shared resource. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had their request satisfied. 
CLAIMS What is claimed is: 1. A method for sharing a resource in a multiprocessing system, the method comprising: receiving, from a first plurality of requesting entities in a processor, requests for a shared resource; granting a first entity of the first plurality of requesting entities access to the shared resource; allocating a reservation to other entities of the first plurality of requesting entities for the shared resource; and blocking the first entity from re-requesting the shared resource at least until each entity of the first plurality of requesting entities has been granted access to the shared resource. 2. The method of Claim 1 further comprising: granting a second entity of the first plurality of requesting entities access to the shared resource; and blocking the second entity from re-requesting the shared resource at least until each entity of the first plurality of requesting entities has been granted access to the shared resource. 3. The method of Claim 2 further comprising: allocating a reservation to each entity of a second plurality of requesting entities for the shared resource; and blocking the first and second entity from re-requesting the shared resource at least until each entity of the second plurality of requesting entities has been granted access to the shared resource. 4. An article of manufacture comprising: a machine-accessible medium including data and instructions for allocating a shared resource among a plurality of entities that, when accessed by a machine, cause the machine to: grant a first requesting entity of the plurality of entities access to the shared resource; allocate a reservation for the shared resource to each requesting entity of the plurality of entities; and block the first entity from re-requesting the shared resource at least until no entity of the plurality of entities has been allocated a reservation but has not yet been granted access to the shared resource. 5. The article of manufacture of Claim 4, said machine-accessible medium including data and instructions that, when accessed by a machine, cause the machine to: grant a second requesting entity of the plurality of entities access to the shared resource; and block the first and second entities from re-requesting the shared resource at least until there are none of the plurality of entities that, after being allocated a reservation, were not then granted access to the shared resource. 6. The article of manufacture of Claim 5, said machine-accessible medium including data and instructions that, when accessed by a machine, cause the machine to: allocate a reservation for the shared resource to each requesting entity of the plurality of entities that has not already been granted access to the shared resource; and clear a first state variable when each entity of the plurality of entities that was allocated a reservation has had their request satisfied. 7. 
A processor comprising: multithreading logic to execute a plurality of threads of executable instructions; a shared resource to be allocated fairly among threads of the plurality of threads of executable instructions contending for access to the shared resource; a thread scheduling logic to allocate the shared resource among the plurality of threads of executable instructions by: granting a first requesting thread of the plurality of threads of executable instructions access to the shared resource; allocating a reservation for the shared resource to requesting threads of the plurality of threads of executable instructions; and blocking the first thread from re-requesting the shared resource at least until every thread of the plurality of threads of executable instructions that has been allocated a reservation has had their request satisfied. 8. The processor of Claim 7, said thread scheduling logic to further allocate the shared resource among the plurality of threads of executable instructions by: granting a second requesting thread of the plurality of threads of executable instructions access to the shared resource; and blocking the first and second thread from re-requesting the shared resource at least until every thread of the plurality of threads of executable instructions that has been allocated a reservation has been granted access to the shared resource. 9. The processor of Claim 7, said thread scheduling logic to further allocate the shared resource among the plurality of threads of executable instructions by: blocking all threads from re-requesting the shared resource until every thread of the plurality of threads of executable instructions that has been allocated a reservation has been granted access to the shared resource. 10. The processor of Claim 7, said thread scheduling logic to further allocate the shared resource among the plurality of threads of executable instructions by: allocating a reservation for the shared resource to each requesting thread of the plurality of threads of executable instructions that has not already been granted access to the shared resource; and clearing a first state variable for each thread of the plurality of threads of executable instructions that has been allocated a reservation if it has been granted access to the shared resource. 11. The processor of Claim 10, said thread scheduling logic to further allocate the shared resource among the plurality of threads of executable instructions by: maintaining the first state variable for each thread of the plurality of threads of executable instructions having an outstanding or completed request, until every thread that has been allocated a reservation has been granted access to the shared resource. 12. 
A processor comprising: simultaneous multithreading logic to execute a plurality of threads of executable instructions; one or more cache memories to store a copy of one or more portions of data and/or executable instructions from an addressable memory, at least in part through the use of a shared resource; a finite-state machine for allocating the shared resource among the plurality of threads of executable instructions, said finite-state machine to: grant a first requesting thread of the plurality of threads of executable instructions access to the shared resource; allocate a reservation for the shared resource to requesting threads of the plurality of threads of executable instructions; and block the first thread from re-requesting the shared resource at least until no thread of the plurality of threads of executable instructions has been allocated a reservation but has not been granted access to the shared resource. 13. The processor of Claim 12, said finite-state machine to: block all threads from re-requesting the shared resource until every thread of the plurality of threads of executable instructions that has been allocated a reservation has also been granted access to the shared resource. 14. The processor of Claim 12, said finite-state machine to: allocate a reservation for the shared resource to each requesting thread that has not already been granted access to the shared resource; and clear a first state variable for each thread that has been allocated a reservation if it has been granted access to the shared resource. 15. The processor of Claim 14, said finite-state machine to: maintain the first state variable for each thread having an outstanding or completed request, until every thread that has been allocated a reservation has been granted access to the shared resource. 16. A computing system comprising: an addressable memory to store data and also to store executable instructions; one or more cache memories to store a copy of one or more portions of the data and/or the executable instructions stored in the addressable memory, at least in part through the use of a shared resource; a multiprocessor including simultaneous multithreading logic to execute a plurality of threads of executable instructions, the multiprocessor operatively coupled with the addressable memory and including a finite-state machine for allocating the shared resource among the plurality of threads of executable instructions, said finite-state machine to: grant a first requesting thread of the plurality of threads of executable instructions access to the shared resource; allocate a reservation for the shared resource to requesting threads of the plurality of threads of executable instructions; and block the first thread from re-requesting the shared resource at least until no thread of the plurality of threads of executable instructions has been allocated a reservation but has not yet been granted access to the shared resource. 17. The computing system of Claim 16, said finite-state machine to: allocate a reservation for the shared resource to each requesting thread that has not already been granted access to the shared resource; and clear a first state variable for each thread that has been allocated a reservation if it has been granted access to the shared resource. 18. 
The computing system of Claim 17, said finite-state machine to: maintain the first state variable for each thread having an outstanding or completed request, until every thread that has been allocated a reservation, has been granted access to the shared resource. 19. The computing system of Claim 16, said finite-state machine to: grant a second requesting thread of the plurality of threads of executable instructions access to the shared resource; and block the first and second thread from re-requesting the shared resource at least until every thread of the plurality of threads of executable instructions that has been allocated a reservation, has been granted access to the shared resource. 20. The computing system of Claim 19, said finite-state machine to: block all threads from re-requesting the shared resource until every thread of the plurality of threads of executable instructions that has been allocated a reservation, has also been granted access to the shared resource. |
GENERATIONAL THREAD SCHEDULER

FIELD OF THE DISCLOSURE

This disclosure relates generally to the field of microprocessors. In particular, the disclosure relates to a scheduler for efficiently and fairly scheduling shared resources among threads of instructions in a multithreaded processor.

BACKGROUND OF THE DISCLOSURE

In multiprocessing, processors may employ multithreading logic to execute a plurality of threads of executable instructions. These threads of executable instructions may also share processor execution resources such as, for example, a page miss handler, or a hardware page walker, or a cache fill buffer, or some other execution resource. A thread picker may choose one of several threads from which to issue instructions for execution. The thread picker may use a nominally round-robin algorithm so that all threads have equal access to the execution hardware. In some cases the thread picker may deviate from round-robin if the resources needed by a thread are temporarily unavailable. The thread picker may attempt to maintain fairness of resource allocation by dynamically computing resource thresholds for competing threads and filtering out those threads that have exceeded their resource thresholds. This may require the thread picker to store and maintain additional state information, for example thresholds, for shared resources and threads regardless of their actual shared resource use.

Some processor execution resources may require multiple clocks to service a request. For example, a hardware page walker may need tens of clock cycles to walk the page tables. This may give rise to a problem, in that once one thread has successfully sent a request to the shared resource, and the resource becomes busy, other threads that subsequently request access to the resource will be denied until the resource becomes available. If no provisions are made to ensure fairness, it is possible that the resource may be acquired again and again by the same thread, or alternatively by some subset of all of the threads. Consequently, this may permit a condition whereby a small number of threads hog a resource for long periods of time. Eventually, a live-lock detector may elevate priority levels to prevent a thread from experiencing complete starvation, but such techniques do not suffice to prevent an unfair allocation of processor execution resources from reoccurring. To date, efficient logic and structures for fairly scheduling shared resources among contending threads of instructions in multithreaded processors have not been fully explored.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings.

Figure 1 illustrates one embodiment of a multithreaded processor using a mechanism for efficiently and fairly scheduling shared resources among multiple threads of instructions.

Figure 2 illustrates another embodiment of a multithreaded processor using a mechanism for efficiently and fairly scheduling shared resources among multiple threads of instructions.

Figure 3 illustrates one embodiment of a multithreaded processing system using a mechanism for efficiently and fairly scheduling shared resources among threads of instructions in a multithreaded processor.

Figure 4 illustrates one embodiment of a mechanism for efficiently and fairly scheduling shared resources among multiple threads of instructions. 
Figure 5 illustrates one embodiment of a state machine for a mechanism to efficiently and fairly schedule shared resources among multiple threads of instructions.

Figure 6 illustrates a flow diagram for one embodiment of a process to efficiently and fairly schedule shared resources among threads of instructions in a multithreaded processor.

Figure 7 illustrates a flow diagram for an alternative embodiment of a process to efficiently and fairly schedule shared resources among threads of instructions in a multithreaded processor.

DETAILED DESCRIPTION

Methods and apparatus for a generational thread scheduler are disclosed herein. One embodiment may be used with processor multithreading logic to execute threads of executable instructions, and to allocate a shared resource fairly among the threads of executable instructions contending for access to the shared resource. Generational thread scheduling logic can allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource and allocating a reservation for the shared resource to each requesting thread of the executing threads. Generational thread scheduling logic then blocks threads from re-requesting the shared resource until every other thread that has been allocated a reservation, has also been granted access to the shared resource. Generation tracking state may be cleared when each requesting thread of the generation that was allocated a reservation has had access to the shared resource. Thus, a generational thread scheduler may allocate a shared processor execution resource fairly among requesting threads of executable instructions contending for access to the shared resource over each generation of requests. It will be appreciated that such a mechanism may avoid unbalanced degradation in performance for some threads due to unfair allocation of access to shared processor execution resources during periods of contention for those execution resources.

It will be appreciated that while the description below typically refers to a shared resource being requested by threads of executable instructions, the invention is not so limited. The techniques herein described may be applicable to requesting hardware devices, or software processes, or firmware, or any other types of requesting entities alone or in combination. These and other embodiments of the present invention may be realized in accordance with the following teachings and it should be evident that various modifications and changes may be made in the following teachings without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense and the invention measured only in terms of the claims and their equivalents.
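By way of illustration only, the grant, reserve and block behavior summarized above may be sketched in software. In this minimal sketch the type and function names (gts_t, gts_request, gts_complete) and the NUM_THREADS bound are assumptions invented for the illustration, not elements of the disclosed embodiments; a hardware embodiment would realize equivalent behavior with per-thread state bits and combinational logic rather than C code.

    #include <stdbool.h>

    #define NUM_THREADS 4

    typedef struct {
        bool reserved[NUM_THREADS];  /* R: thread holds a reservation          */
        bool granted[NUM_THREADS];   /* G: thread was granted this generation  */
        bool busy;                   /* shared resource currently in use       */
    } gts_t;

    /* A thread requests the shared resource: it is granted access if the
     * resource is free and the thread is not blocked for this generation;
     * otherwise it is allocated a reservation and must wait.  Returns true
     * when access is granted. */
    static bool gts_request(gts_t *g, int tid)
    {
        if (g->granted[tid] && !g->reserved[tid])
            return false;                /* blocked until the generation ends */
        g->reserved[tid] = true;         /* allocate a reservation            */
        if (g->busy)
            return false;                /* reserved, but must wait           */
        g->busy = true;                  /* grant access                      */
        g->granted[tid] = true;
        return true;
    }

    /* The resource finished servicing thread tid: clear its reservation, and
     * when no reservations remain outstanding the generation is over, so the
     * granted marks are cleared and every thread may request again. */
    static void gts_complete(gts_t *g, int tid)
    {
        g->reserved[tid] = false;
        g->busy = false;
        for (int t = 0; t < NUM_THREADS; t++)
            if (g->reserved[t])
                return;                  /* generation still open             */
        for (int t = 0; t < NUM_THREADS; t++)
            g->granted[t] = false;       /* clear generation tracking state   */
    }

In this model a thread that has completed its request during the current generation observes its granted bit set with its reservation cleared, and is therefore refused further access until every reservation holder has been serviced and the generation tracking state is cleared.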
Figure 1 illustrates one embodiment of a multithreaded processor 105 using a mechanism for efficiently and fairly scheduling shared resources among multiple threads of instructions. One embodiment of multithreaded processor 105 includes an apparatus 101 that uses a shared page miss handler, PMH 110 and hardware page walker, HPW 116 for multiple multithreaded processing cores 102-104 and/or other devices to share virtual memory in a multi-core system. Apparatus 101 comprises translation look-aside buffer, TLB 112 to store second level cache (L2) virtual address translation entries. Page-miss handler, PMH 110, is coupled with the TLB 112 to facilitate page walks on page misses using HPW 116 and to populate virtual address translation entries of TLB 112. For some embodiments page-miss handler, PMH 110 and HPW 116 are indistinguishable, although for some first level cache (L1) page misses a page table walk may not be required. For the sake of illustration TLB 112, HPW 116 and PMH 110 are shown as being included in apparatus 101 but it will be appreciated that portions of one or all may be implemented as separate or distributed hardware and/or software data structures and may reside outside of apparatus 101, for example including in main memory. Apparatus 101 also comprises generational thread scheduler (GTS) 103, which is shown as being included in apparatus 101 but may be implemented as separate hardware or software and may reside outside of apparatus 101. Apparatus 101 is operatively coupled with bus/interconnect 115 for communicating with a multi-core processor or multi-core processing system having multiple multithreaded processor cores or other processing devices, for sharing virtual memory in the multi-core system. The system may include multiple multithreaded processor cores, two of which are shown as core 102 and core 104, as well as other processing devices such as graphics devices, two of which are shown as GFX 106 and GFX 108, and optionally other processing devices such as video device 107 and device 109. The multiple processor cores 102 and 104 may be multithreaded cores processing multiple process threads for execution via decode 131 and decode 151, per-thread queues 133 and 153, floating point/single-instruction multiple-data registers FP/SIMD REGS 135a and FP/SIMD REGS 155a, general registers GEN REGS 135b and GEN REGS 155b, floating point/single-instruction multiple-data execution units FP/SIMD EXU 137a and FP/SIMD EXU 157a, and integer execution units INT EXU 137b and INT EXU 157b, respectively. Core 102 and core 104 may also be coupled with external memory (not shown) via a bus/interconnect 115 and memory units MEM-U 125 and MEM-U 145 through bus/interconnect units B/I-U 120 and B/I-U 140, respectively. Core 102 and core 104 may also be coupled with graphics processing devices GFX 106 and GFX 108, and optionally other heterogeneous processing devices such as video device 107 and device 109 via external memory and bus/interconnect 115, and optionally via a last level cache (not shown). These multiple processing cores or other processing devices may also share virtual memory address spaces via external physical memory and optionally through a last level cache (not shown). Typically, the processor cores 102 and 104 may have cache hierarchies, e.g. I-cache 123, D-cache 124, L2 126 and I-cache 143, D-cache 144, L2 146, respectively; and TLBs, e.g. I-TLB 121, D-TLB 122 and I-TLB 141, D-TLB 142, respectively to cache virtual to physical address translations from the system page tables in a paged virtual memory system. The graphics processors, GFX 106 and GFX 108, and optionally other processing devices such as video device 107 and device 109 may also have mechanisms such as TLBs, e.g. TLB 162, TLB 182, TLB 172 and TLB 192, respectively, for performing virtual to physical address translations. Various embodiments of TLB 162, TLB 182, TLB 172 and TLB 192, respectively, may or may not have the same capabilities, or capabilities comparable to homogeneous processor cores 102 and 104. 
The graphics processing devices GFX 106, GFX 108, and optionally video device 107 and device 109 may also have caches, e.g. cache 164, cache 184, cache 174 and cache 194, respectively. If one or more threads of processor cores 102 and 104, graphics processing devices GFX 106, GFX 108, and optionally video device 107 and device 109, while accessing their TLBs via a TLB lookup, generate a page miss, then they may send page miss requests to shared PMH 110 of apparatus 101. Apparatus 101 may receive one or more page miss requests, e.g. in a page miss request queue, from one or more respective requesting threads on devices of a plurality of devices, processor cores 102 and 104, graphics processing devices GFX 106, GFX 108, and optionally video device 107 and device 109, in the multi-core system. When processing a page miss request from one of the requesting devices, apparatus 101 may include generational thread scheduler 103 in order to arbitrate and identify which page miss request of the one or more requesting threads to process. In some embodiments, generational thread scheduler 103 may be used with processor cores 102 and 104 multithreading logic, and per-thread queues 133 and 153, to pick threads for execution and to allocate a shared resource fairly, such as a shared PMH 110 and HPW 116 of apparatus 101, among the threads contending for access to the shared resource. Generational thread scheduler 103 can allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource and allocating a reservation for the shared resource to each requesting thread. Generational thread scheduler 103 then blocks the threads from re-requesting the shared resource until every other thread that has been allocated a reservation, has also been granted access to the shared resource. Generation tracking state can be cleared by generational thread scheduler 103 when each requesting thread of the generation that was allocated a reservation has had access to the shared resource. In some embodiments, generational thread scheduler 103 may allocate access to shared PMH 110 separately from access to shared HPW 116. Apparatus 101 may perform a second local TLB 112 lookup to satisfy the page miss request, and then upon a page miss in TLB 112, generational thread scheduler 103 may allocate access or a reservation to shared HPW 116 to perform a page table walk to generate a physical address responsive to the first page miss request. Upon completion either by shared PMH 110 with or without use of shared HPW 116 the physical address may be sent by communication logic of apparatus 101 to the device of the corresponding requesting thread, or a fault may be signaled by apparatus 101 to an operating system for the corresponding requesting thread responsive to the page miss request. It will be appreciated that whenever duplicate page miss requests are received by apparatus 101, if any duplicate request has been, or is being, processed by PMH 110, the other duplicate requests may be allocated a reservation for PMH 110 and wait to be satisfied along with the first request. Thus handling a duplication of requests from different threads may be performed by generational thread scheduler 103 for the shared PMH 110 and HPW 116 of apparatus 101 when virtual memory space is shared by multiple devices. Similarly, if the first request generates a page fault due to a page not being present in physical memory, duplicate page fault signals to the operating system for the same reason may be eliminated, while page faults for access rights violations may be preserved but without a duplication of the page walk using shared HPW 116.
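The duplicate-request handling described above may be pictured with a small software sketch. The names pmh_req, pmh_enqueue and MAX_PENDING are assumptions of the sketch rather than details of the disclosed apparatus; the point illustrated is only that a second miss to the same virtual page joins the first request as a waiter instead of triggering a second page walk.

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_PENDING 16

    /* One outstanding page miss request at the shared page miss handler. */
    struct pmh_req {
        bool     valid;
        uint64_t vpn;        /* virtual page number being walked        */
        uint32_t waiters;    /* bitmask of threads awaiting this walk   */
    };

    static struct pmh_req pending[MAX_PENDING];

    /* Enqueue a page miss for thread tid.  If another thread has already
     * requested the same virtual page, the new thread is recorded as a
     * waiter and will be satisfied along with the first request, so the
     * page walk is not duplicated.  Returns true if an existing walk was
     * joined. */
    static bool pmh_enqueue(uint64_t vpn, unsigned tid)
    {
        for (int i = 0; i < MAX_PENDING; i++)
            if (pending[i].valid && pending[i].vpn == vpn) {
                pending[i].waiters |= 1u << tid;   /* coalesce duplicate */
                return true;
            }
        for (int i = 0; i < MAX_PENDING; i++)
            if (!pending[i].valid) {
                pending[i] = (struct pmh_req){ true, vpn, 1u << tid };
                return false;                      /* new walk needed    */
            }
        return false;  /* queue full; the requester must retry (not shown) */
    }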
Figure 2 illustrates another embodiment of a multithreaded processor 205 using a mechanism for efficiently and fairly scheduling shared resources among multiple threads of instructions. One embodiment of processor 205 utilizes a shared page miss handler and/or a shared hardware page walker for threads executing on multiple processing cores or other devices to share virtual memory in a multi-core system. Apparatus 201 of processor 205 comprises TLB 212 to store virtual address translation entries. Page-miss handler, PMH 210, is coupled with the TLB 212 to facilitate page walks using shared hardware page walker, HPW 216, on page misses and to populate virtual address translation entries of TLB 212. For the sake of illustration TLB 212, HPW 216 and PMH 210 are shown as being included in apparatus 201 but it will be appreciated that portions of one or all may be implemented as separate or distributed hardware and/or software data structures and reside outside of apparatus 201, for example including in main memory. Apparatus 201 also comprises generational thread scheduler, GTS 203 and optionally comprises second level cache, L2 214, which are shown as being included in apparatus 201 but may be implemented as separate hardware and/or software and may reside outside of apparatus 201. Apparatus 201 is operatively coupled with busses/interconnects 215 and 251 for communicating with multi-core processor 205 or a multi-core processing system having multiple multithreaded processor cores and/or other processing devices, for sharing virtual memory, via memory control 252 through external memory (not shown) in the multi-core system. The system may include multiple multithreaded processor cores, two of which are shown as core 202 and core 204, as well as other processing devices such as graphics devices, two of which are shown as GFX 206 and GFX 208, and optionally other processing devices such as video device 207 and device 209. The multiple processor cores 202 and 204 may be multithreaded cores processing multiple process threads for execution as described, for example, with regard to Figure 1. Core 202 and core 204 may be coupled with various devices via a bus/interconnect 215, e.g. I/O expansion device 237, NAND control 257, transport processor 258, security processor 259, video display logic 227, audio/video I/O 248, audio decode logic 249, and optionally single instruction multiple data (SIMD) coprocessor 291. Core 202 and core 204 may also be coupled with external memory via a bus/interconnect 251 and memory control 252. Core 202 and core 204 may also be coupled with graphics processing devices GFX 206 and GFX 208, and optionally other processing devices such as video device 207 and device 209 via external memory and bus/interconnects 215 and 251 and optionally via a last level cache (not shown). These multiple processing cores or other processing devices may share virtual memory address spaces via an external main memory and optionally through last level cache (not shown). Typically, the processor cores may have cache hierarchies, and TLBs, e.g. 
TLB 222 and TLB 242, respectively to cache virtual to physical address translations from the system page tables in a paged virtual memory system. The graphics processing devices, GFX 206 and GFX 208, and optionally other processing devices such as video device 207 and device 209 may also have mechanisms such as TLBs, e.g. TLB 262, TLB 282, TLB 272 and TLB 292, respectively, for performing virtual to physical address translations. Various embodiments of TLB 262, TLB 282, TLB 272 and TLB 292, respectively, may or may not have the same capabilities, or capabilities comparable to processor cores 202 and 204. If one or more of processor cores 202 and 204, graphics processing devices GFX 206, GFX 208, and optionally video device 207 and device 209, while accessing their TLBs via a TLB lookup, generate a page miss, then they may send page miss requests to the shared PMH 210 of apparatus 201. Apparatus 201 may receive one or more page miss requests from one or more respective requesting devices of the plurality of devices, processor cores 202 and 204, graphics processing devices GFX 206, GFX 208, and optionally video device 207 and device 209, in the multi-core system by any suitable means, e.g. such as a request queue. When processing a page miss request from one of the requesting devices, apparatus 201 may include generational thread scheduler, GTS 203, in order to arbitrate and identify which page miss request of the one or more requesting threads to process. In some embodiments, GTS 203 may be used with processor cores 202 and 204 multithreading picker logic to pick threads for execution and to allocate a shared resource fairly, such as a shared PMH 210 and/or HPW 216 of apparatus 201, among the threads contending for access to the shared resource. Generational thread scheduler, GTS 203, can allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource and allocating a reservation for the shared resource to each requesting thread. Generational thread scheduler, GTS 203, then blocks the threads from re-requesting the shared resource until every other thread that has been allocated a reservation, has also been granted access to the shared resource. Generation tracking state can be cleared by GTS 203 when each requesting thread of the generation that was allocated a reservation has had access to the shared resource. In some embodiments, portions of PMH 210 may be distributed and/or included in processor cores 202 and 204, or thread scheduler 203 may allocate access to a shared PMH 210 separately from access to a shared HPW 216. Apparatus 201 may perform a second local TLB 212 lookup to satisfy the page miss request, and then upon a page miss in TLB 212, GTS 203 may allocate access or a reservation to the shared HPW 216 to perform a page table walk and generate a physical address responsive to the first page miss request. Upon completion either by shared PMH 210 or by shared HPW 216 the physical address may be sent by communication logic of apparatus 201 to the device of the corresponding requesting thread, or a fault may be signaled by apparatus 201 to an operating system for the corresponding requesting thread responsive to the first page miss request. 
It will be appreciated that whenever duplicate page miss requests are received by apparatus 201, if any duplicate request has been, or is being, processed by PMH 210, the other duplicate requests may be allocated a reservation for PMH 210 and wait to be satisfied along with the first request. Thus handling a duplication of requests from different threads may be performed by GTS 203 for the shared PMH 210 and HPW 216 of apparatus 201 when virtual memory space is shared by multiple devices. Similarly, if the first request generates a page fault due to a page not being present in physical memory, duplicate page fault signals to the operating system for the same reason may be eliminated, while page faults for access rights violations may be preserved but without a duplication of the page walk using HPW 216.

Figure 3 illustrates one embodiment of a multithreaded processing system using a mechanism for efficiently and fairly scheduling shared resources among threads of instructions in a multithreaded processor. System 300 includes apparatus 301 of processor 305, which comprises TLB 312 to store virtual address translation entries. Page-miss handler, PMH 310, is coupled with the TLB 312 to facilitate page walks on page misses and to populate virtual address translation entries of TLB 312. For the sake of illustration TLB 312, HPW 316 and PMH 310 are shown as being included in apparatus 301 but it will be appreciated that portions of one or all may be implemented as separate or distributed hardware and/or software data structures and reside outside of apparatus 301, for example including in main memory 355. Apparatus 301 also comprises GTS 303 and optionally comprises second level cache, L2 314, which are shown as being included in apparatus 301 but may be implemented as separate hardware or software and may reside outside of apparatus 301. Apparatus 301 is operatively coupled with busses/interconnects 315 and 351 for communicating with multi-core processor 305 or a multi-core processing system having multiple processor cores or other processing devices, for sharing virtual memory, via memory control 352 through external memory 355, in the multi-core system. Embodiments of system 300 may be implemented using standard or non-standard or proprietary technologies, interfaces, busses or interconnects 315 and 351 such as PCI (Peripheral Component Interconnect) or PCI Express or SATA (Serial Advanced Technology Attachment) for communicating with a multi-core processor or multi-core processing system. Other embodiments of system 300 may be implemented using standard or non-standard or proprietary technologies, interfaces, busses or interconnects—for example, the SPI (Serial Peripheral Interface) bus; the ISA (Industry Standard Architecture) bus, PC/104, PC/104+ and Extended ISA; USB (Universal Serial Bus) AVC (Audio Video Class); AMBA (Advanced Microcontroller Bus Architecture) APB (Advanced Peripheral Bus); FireWire (IEEE Std 1394a-2000 High Performance Serial Bus—Amendment 1, ISBN 0-7381-1958-X; IEEE Std 1394b-2002 High Performance Serial Bus—Amendment 2, ISBN 0-7381-3253-5; IEEE Std 1394c-2006, 2007-06-08, ISBN 0-7381-5237-4); HDMI (High-Definition Multimedia Interface); the VESA's (Video Electronics Standards Association) DisplayPort and Mini DisplayPort; the MIPI® (Mobile Industry Processor Interface) Alliance's SLIMbus® (Serial Low-power Inter-chip Media Bus), LLI (Low Latency Interface), CSI (Camera Serial Interface), DSI (Display Serial Interface), etc. 
System 300 may include multiple processor cores, two of which are shown as core 302 and core 304, as well as other processing devices such as graphics devices, two of which are shown as GFX 306 and GFX 308, and optionally other processing devices such as video device 307 and device 309. The multiple processor cores 302 and 304 may be multithreaded cores processing multiple process threads for execution. Processor core 302 and core 304 may be coupled with various devices via a bus/interconnect 315, e.g. bridge 330, wireless connectivity device 320, modem device 326, and audio I/O devices 328. Some embodiments of system 300 may be implemented as a system on a chip, for example, to use in a tablet computer or a smart phone. In such embodiments wireless connectivity device 320 may provide a wireless LAN (local area network) link, modem device 326 may provide a 4G (fourth generation) or other telephone link, and audio I/O devices 328 may provide a set of audio human interface devices, for example, a headset, speakers, handset microphone, audio input and output channels, and amplifiers. Processor cores 302 and 304 are coupled with bus/interconnect 315 for communicating with various other system devices, which may include but are not limited to wireless connectivity device 320, modem device 326, and audio I/O devices 328, camera interface 321, Fast IrDA (Infrared Data Association) port 323, HD (high definition) multimedia interface 324, USB 325, display control 327, and alternate master interface 329. Processor cores 302 and 304 are also coupled with bus/interconnect 315, bridge 330 and bus/interconnect 311 for communicating with various other system devices, which may include but are not limited to flash memory 313, SD (secure digital) memory 316, MMC (multimedia card) 317 and SSD (solid state drive) 319. Processor cores 302 and 304 are coupled with bus/interconnect 315, bridge 330 and bus/interconnect 318 for communicating with various other system devices, which may include but are not limited to UART (universal asynchronous receiver/transmitter) 331, camera control 332, Bluetooth UART 333 optionally including a Wi-Fi 802.11 a/b/g transceiver and/or a GPS (Global Positioning System) transceiver, keypad 334, battery control 335, I/O expansion 337 and touch screen control 339. Processor core 302 and core 304 may also be coupled with memory 355 via a bus/interconnect 351 and memory control 352. Processor core 302 and core 304 may also be coupled with graphics processing devices GFX 306 and GFX 308, and optionally other processing devices such as video device 307 and device 309 via memory 355 and bus/interconnects 315 and 351 and optionally via last level cache (not shown). Memory 355 and other tangible storage media of system 300 may record functional descriptive material including executable instructions to implement a process to use a shared page miss handler PMH 310 or shared HPW 316 for multiple processing cores or other devices to share virtual memory in a multi-core system. Some embodiments of system 300 may adhere to industry standards which allow multiple operating systems running simultaneously within a single computer to natively share devices like Single Root I/O Virtualization (SRIOV), which provides native I/O virtualization in PCI Express topologies, or Multi-Root I/O Virtualization (MRIOV), which provides native I/O virtualization in topologies where multiple root complexes share a PCI Express hierarchy. 
Some embodiments of system 300 may include standard or non-standard or proprietary technologies, interfaces, busses or interconnects such as the SPI bus, USB, AMBA APB, FireWire, HDMI, Mini DisplayPort, MIPI SLIMbus, MIPI LLI, MIPI CSI, MIPI DSI, etc. These multiple processing cores or other processing devices may share virtual memory address spaces via memory 355 and optionally through last level cache (not shown). Typically, the processor cores may have cache hierarchies, and TLBs, e.g. TLB 322 and TLB 342, respectively to cache virtual to physical address translations from a host or guest operating system page tables in a paged virtual memory system. The graphics processing devices, GFX 306 and GFX 308, and optionally other processing devices such as video device 307 and device 309 may also have mechanisms such as TLBs, e.g. TLB 362, TLB 382, TLB 372 and TLB 392, respectively, for performing virtual to physical address translations. Various embodiments of TLB 362, TLB 382, TLB 372 and TLB 392, respectively, may or may not have the same capabilities, or capabilities comparable to processor cores 302 and 304. If one or more of processor cores 302 and 304, graphics processing devices GFX 306, GFX 308, and optionally video device 307 and device 309, while accessing their TLBs via a TLB lookup, generate a page miss, then they may send page miss requests to the shared PMH 310 of apparatus 301. Apparatus 301 may receive one or more page miss requests from one or more respective requesting devices of the plurality of devices, processor cores 302 and 304, graphics processing devices GFX 306, GFX 308, and optionally video device 307 and device 309, in the multi-core system. When processing a page miss request from one of the requesting devices, apparatus 301 may include generational thread scheduler, GTS 303, in order to arbitrate and identify which page miss request of the one or more requesting threads to process. In some embodiments, GTS 303 may be used with processor cores 302 and 304 multithreading picker logic to pick threads for execution and to allocate a shared resource fairly, such as a shared PMH 310 and/or HPW 316 of apparatus 301, among the threads contending for access to the shared resource. Generational thread scheduler, GTS 303, can allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource and allocating a reservation for the shared resource to each requesting thread. Generational thread scheduler, GTS 303, then blocks the first thread from re-requesting the shared resource until every other thread that has been allocated a reservation, has also been granted access to the shared resource. Generation tracking state can be cleared by generational thread scheduler, GTS 303, when each requesting thread of the generation that was allocated a reservation has had access to the shared resource. In some embodiments, portions of PMH 310 may be distributed and included in processor cores 302 and 304, or GTS 303 may allocate access to a shared PMH 310 separately from access to a shared HPW 316. Apparatus 301 may perform a second local TLB 312 lookup to satisfy the page miss request, and then upon a page miss in TLB 312, generational thread scheduler 303 may allocate access or a reservation to the shared HPW 316 to perform a page table walk and generate a physical address responsive to the first page miss request. 
Upon completion either by shared PMH 310 or by shared HPW 316 the physical address may be sent by communication logic of apparatus 301 to the device of the corresponding requesting thread, or a fault may be signaled by apparatus 301 to an operating system for the corresponding requesting thread responsive to the first page miss request. It will be appreciated that whenever duplicate page miss requests are received by apparatus 301, if any duplicate request has been, or is being, processed by PMH 310, the other duplicate requests may be allocated a reservation for PMH 310 and wait to be satisfied along with the first request. Thus duplication of page walks may be eliminated when virtual memory space is shared by multiple devices. Similarly, if the first request generates a page fault, duplicate page fault signals to the operating system may also be eliminated.

Figure 4 illustrates one embodiment of a mechanism 403 for efficiently and fairly scheduling shared resources among multiple threads of instructions. In one embodiment of a processor pipeline 400 a selection process occurs among multiple execution threads T0 through Tn for simultaneous multithreading (SMT). Instruction storage 409 holds instructions of threads T0 through Tn, which are fetched for execution by SMT instruction fetch logic 410 and queued into thread queues 411 through 412 of active or sleeping threads 422. Thread selection logic 413 may perform a selection process adapted to the resource requirements of threads T0 through Tn to avoid inter-thread starvation, and improve efficiency and fairness of resource allocation by use of a generational thread scheduler 403 as is described in greater detail below. Thread selection logic 413 may also prioritize any remaining threads in order to select new instructions to be forwarded to allocation stage 414. In allocation stage 414 certain resources may be allocated to the instructions. In some embodiments, for example, registers may be renamed and allocated from the physical registers of register files in accordance with register alias table entries for each thread. In issue window 415 instructions of threads T0 through Tn occupy entries and await issuance to their respective register files and execution units. In some embodiments, for example, integer instructions may be issued to receive operands, for example from GEN REGS 135b or 155b, for execution in an integer arithmetic/logical unit (ALU) for example 137b or 157b; floating point instructions may be issued to receive operands, for example from FP/SIMD REGS 135a or 155a, for execution in a floating point adder or floating point multiplier, etc. of FP/SIMD EXU 137a or 157a; and single instruction multiple data (SIMD) instructions may be issued to receive operands, for example from FP/SIMD REGS 135a or 155a, for execution in a SIMD ALU, SIMD shifter, etc. of FP/SIMD EXU 137a or 157a. After instructions are issued, they receive their operand registers from their respective register files, for example 135a, 155a, 135b or 155b, as they become available and then proceed to execution stage 419 where they are executed either in order or out of order to produce their respective results. In the case of memory operands, either a memory read, perhaps prior to execution stage 419, or a memory write, perhaps following execution stage 419, may be performed. 
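As an aside on thread selection logic 413, a nominally round-robin pick that skips threads which are asleep, for example while they wait on a reservation, may be sketched as follows. The names pick_thread, thread_active and NTHREADS are invented for this illustration, and the ACTIVE/SLEEP distinction it relies on is the thread picker state discussed with table 435 below.

    #include <stdbool.h>

    #define NTHREADS 4

    static bool thread_active[NTHREADS];   /* ACTIVE threads may be picked */

    /* Nominally round-robin selection starting after the last pick,
     * skipping threads that are asleep (e.g. waiting on a reservation).
     * Returns the picked thread, or -1 if every thread is asleep. */
    static int pick_thread(int last_picked)
    {
        for (int i = 1; i <= NTHREADS; i++) {
            int t = (last_picked + i) % NTHREADS;
            if (thread_active[t])
                return t;
        }
        return -1;
    }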
If one or more instructions of threads T0 through Tn, while accessing their TLBs via a TLB lookup, generate a page miss, then they may send page miss requests to a shared page miss handler, for example PMH 110 of apparatus 101. Apparatus 101 may receive one or more page miss requests from one or more respective requesting threads T0 through Tn, for example of processor cores 102 and/or 104, in a multi-core system. When processing a page miss request from one of the requesting devices, apparatus 101 may include generational thread scheduler (GTS) 403 in order to arbitrate and identify which page miss request of the one or more requesting threads 423, Ti 431 to Tj 432, to process. In some embodiments, GTS 403 may be used with the processor core thread picker logic 413 to pick threads for execution and to allocate a shared resource (such as a shared PMH 110 and/or HPW 116 of apparatus 101) fairly among the threads contending for access to the shared resource. Generational thread scheduler, GTS 403, can allocate the shared resource efficiently and fairly by granting a first requesting thread access to the shared resource and allocating a reservation for the shared resource to each requesting thread. Generational thread scheduler, GTS 403, then blocks the threads from re-requesting the shared resource until every other thread that has been allocated a reservation, has also been granted access to the shared resource. Generation tracking state 434 can be cleared by thread scheduling logic 433 when each requesting thread of the generation that was allocated a reservation has had access to the shared resource. In embodiments that optionally execute instructions out of sequential order, retirement stage 420 may employ a reorder buffer 421 to retire the instructions of threads T0 through Tn in their respective original sequential orders. In some embodiments a set of generational tracking states 434 (for example of threads 423) and thread picker 413 states (for example of threads 422) may be recorded and/or interpreted according to table 435 as follows for generational tracking states 434: IDLE for a reservation state R = 0 and a granted state G = 0; RESERVE for a reservation state R = 1 and a granted state G = 0; SERVICE for a reservation state R = 1 and a granted state G = 1; BLOCK for a reservation state R = 0 and a granted state G = 1. For thread picker 413 states, a thread may have the SLEEP state: after it has made a request and been allocated a reservation (and not granted access to the shared resource), after it has been granted access and while its request is being serviced, and after it has been blocked from making a new request. A thread may have the ACTIVE state: whenever any request is completed (either the thread's own request or any other thread's request). In the ACTIVE state, the thread may generate a new request, or may repeat the same request if the request was previously not granted.
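The four generation tracking states of table 435 are the four combinations of the R and G state variables, which may be made explicit in a brief sketch (the enum and function names here are assumptions of the sketch only):

    enum gen_state { IDLE, RESERVE, SERVICE, BLOCK };

    /* Decode the (R, G) state-variable pair into the table 435 state. */
    static enum gen_state decode_gen_state(int R, int G)
    {
        if (R == 0 && G == 0) return IDLE;     /* not using the resource  */
        if (R == 1 && G == 0) return RESERVE;  /* holds a reservation     */
        if (R == 1 && G == 1) return SERVICE;  /* request being serviced  */
        return BLOCK;                          /* R == 0, G == 1          */
    }

As a first approximation, a thread in any state other than IDLE would be put to SLEEP by the thread picker, with completion of any request waking it back to ACTIVE, per the description above.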
Figure 5 illustrates one embodiment of a state machine 500 for a mechanism to efficiently and fairly schedule shared resources among multiple threads of instructions. For one embodiment a state machine 500 may be dynamically built, stored and maintained, for example by thread scheduling logic 433 of generational thread scheduler, GTS 403, for each outstanding and completed request for a shared resource during a request generation. For another embodiment separate sets of state machines 500 may be dynamically built, stored and maintained, for each instance of a shared resource during a request generation. For an alternative embodiment one collective state machine 500 may be dynamically built, stored and maintained, for all instances of a particular type of resources during a request generation. Beginning in state 540 a requesting thread is not using the shared resource. In one embodiment in state 540 of state machine 500 a reservation state R = 0, and a granted state G = 0. Upon a request being made by the thread to access the shared resource, a generational thread scheduler can allocate the shared resource efficiently and fairly by granting the requesting thread access to the shared resource wherein according to state transition 501, the requesting thread acquires the resource and moves to state 541, or by allocating a reservation for the shared resource to the requesting thread, wherein according to state transition 502, the requesting thread moves to state 542. For one embodiment, in state 542 the reservation state R may be set to one (1), and the granted state G may remain at zero (0). In state 542, the requesting thread has a reservation to use the shared resource and either the thread will eventually be granted access to the shared resource by the generational thread scheduler, wherein according to state transition 521, the requesting thread acquires the resource and moves to state 541, or the thread's request may be satisfied by another thread's duplicate request, wherein according to state transition 520, the requesting thread returns to state 540. For one embodiment, in state 541 both the reservation state R and the granted state G may be set to one (1) regardless of which state transition 501 or 521 resulted in the requesting thread acquiring the resource. Upon completion of the request from the thread by the shared resource, a generational thread scheduler can determine if every other thread that has been allocated a reservation, has also been granted access to the shared resource (i.e. when no other threads have outstanding reservations) wherein according to state transition 510, the requesting thread moves to state 540; or when one or more other threads have a reservation for the shared resource, then according to state transition 513, the thread moves to state 543 and is blocked from re-requesting the shared resource. For one embodiment, in state 543 the reservation state R may be reset to zero (0), and the granted state G may remain at one (1). For one embodiment of state machine 500, a generational thread scheduler can determine when every thread that has been allocated a reservation, has also been granted access to the shared resource by checking if any reservation state R is still set to one (1), in which case all threads in state 543 are blocked from re-requesting the shared resource. Upon completion of the requests from any other threads, their reservation states R may be reset to zero (0). Therefore, when no remaining reservation state R is set to one (1) the current generation of requests is completed, wherein according to state transition 530, the thread moves from state 543 to state 540.
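Collecting the transitions just described, one possible event-driven rendering of state machine 500 is sketched below. The event names and the software framing are assumptions of this sketch; the state and transition numbers mirror Figure 5 as described above.

    /* Events observed by one thread's state machine (names invented here). */
    enum sm_event {
        GRANTED,            /* request granted access to the resource     */
        RESERVED,           /* request given a reservation instead        */
        DUP_SATISFIED,      /* satisfied by another thread's duplicate    */
        DONE_LAST,          /* request done, no other reservations remain */
        DONE_OTHERS_WAIT,   /* request done, other reservations remain    */
        GENERATION_OVER     /* no reservation state R remains set         */
    };

    static int sm_next(int state, enum sm_event ev)
    {
        switch (state) {
        case 540:                                   /* not using the resource */
            if (ev == GRANTED)  return 541;         /* transition 501         */
            if (ev == RESERVED) return 542;         /* transition 502         */
            break;
        case 542:                                   /* holding a reservation  */
            if (ev == GRANTED)       return 541;    /* transition 521         */
            if (ev == DUP_SATISFIED) return 540;    /* transition 520         */
            break;
        case 541:                                   /* being serviced         */
            if (ev == DONE_LAST)        return 540; /* transition 510         */
            if (ev == DONE_OTHERS_WAIT) return 543; /* transition 513         */
            break;
        case 543:                                   /* blocked                */
            if (ev == GENERATION_OVER) return 540;  /* transition 530         */
            break;
        }
        return state;    /* otherwise remain in the current state */
    }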
Figure 6 illustrates a flow diagram for one embodiment of a process 601 to efficiently and fairly schedule shared resources among threads of instructions in a multithreaded processor. Process 601 and other processes herein disclosed are performed by processing blocks that may comprise dedicated hardware or software or firmware operation codes executable by general purpose machines or by special purpose machines or by a combination of both. In processing block 610 a reservation state R is initialized to store a value of zero (0). In processing block 615 a granted state G stores a value of zero (0). In processing block 620 a determination is made whether or not access to the shared resource is requested. If not, processing returns to processing block 615. Otherwise processing proceeds to processing block 625 where a reservation state R is set to one (1) to signify that a corresponding requesting thread has a reservation for the shared resource. In processing block 630 the resource is checked to see if it is busy. If so, the requesting thread waits at processing block 630 until the shared resource is available. When it is determined in processing block 630 that the shared resource is not busy, processing proceeds to processing block 635 where a determination is made by the generational thread scheduler whether the present request should be granted. If not, processing returns to processing block 630. Otherwise, the requesting thread is granted access to the shared resource and processing proceeds to processing block 640 where a granted state G is set to store a value of one (1). In processing block 645 the resource is checked to see if it has completed the present request. If not, the requesting thread waits at processing block 645 until the request has been completed by the shared resource. Upon completion of the request from the current thread by the shared resource, processing proceeds to processing block 650 where a reservation state R is reset to store a value of zero (0). Then in processing block 655 a generational thread scheduler can determine when every thread that has been allocated a reservation, has also been granted access to the shared resource by checking if any reservation state R is still set to one (1), in which case the present thread is blocked from re-requesting the shared resource and waits at processing block 655. When it is determined in processing block 655 that no reservation state R is still set to one (1), processing proceeds to processing block 615 where the granted state G for the present thread is reset to store a value of zero (0). Thus generation tracking state is cleared by the generational thread scheduler when each requesting thread of the generation that was allocated a reservation has had access to the shared resource. It will be appreciated that embodiments of process 601 may execute their processing blocks in a different order than the one illustrated or in parallel with other processing blocks when possible. For one embodiment a process 601 may be dynamically maintained, for example by thread scheduling logic 433 of generational thread scheduler, GTS 403, for each outstanding and completed request for a shared resource during a request generation. For another embodiment separate sets of processes 601 may be dynamically maintained, for each instance of a shared resource during a request generation.
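Viewed from a single requesting thread, the flow of process 601 may also be sketched as a polling loop. The helper functions stand in for the decision blocks of the flow diagram and are assumptions of the sketch (declared but left undefined here); a hardware embodiment would evaluate the same conditions as state bits rather than as software spin loops.

    #include <stdbool.h>

    extern bool access_requested(int tid);    /* block 620 condition */
    extern bool resource_busy(void);          /* block 630 condition */
    extern bool request_granted(int tid);     /* block 635 decision  */
    extern bool request_complete(int tid);    /* block 645 condition */
    extern bool any_reservation_set(void);    /* block 655 condition */
    extern bool R[], G[];

    static void process_601(int tid)
    {
        R[tid] = false;                        /* block 610 */
        for (;;) {
            G[tid] = false;                    /* block 615 */
            if (!access_requested(tid))        /* block 620 */
                continue;
            R[tid] = true;                     /* block 625 */
            do {
                while (resource_busy())        /* block 630 */
                    ;
            } while (!request_granted(tid));   /* block 635 */
            G[tid] = true;                     /* block 640 */
            while (!request_complete(tid))     /* block 645 */
                ;
            R[tid] = false;                    /* block 650 */
            while (any_reservation_set())      /* block 655 */
                ;                              /* blocked from re-requesting */
        }
    }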
Figure 7 illustrates a flow diagram for an alternative embodiment of a process 701 to efficiently and fairly schedule shared resources among threads of instructions in a multithreaded processor. In processing block 710 new thread requests for a shared resource are received by a generational thread scheduler. In processing block 720 reservations are allocated to the new requesting threads for the shared resource. In processing block 730 the resource is monitored to see if it is busy. If not, a requesting thread is granted access to the shared resource in processing block 740 and processing proceeds to processing block 750. Otherwise processing proceeds directly to processing block 750 where the resource is monitored to see if the current granted request is complete. If not, processing continues in processing block 710. Otherwise when a request is completed in processing block 750, processing proceeds to processing block 760 where the granted thread's reservation is cleared. Processing then proceeds to processing block 770 where the generational thread scheduler determines if any thread that has been allocated a reservation, has not been granted access to the shared resource by checking if any reservations are still outstanding, in which case threads are blocked from re-requesting the shared resource in processing block 780 until every thread that has been allocated a reservation, has been granted access to the shared resource. Otherwise all requests for the shared resource are unblocked in processing block 790. Processing then continues in processing block 710. It will be appreciated that embodiments of process 701 may execute their processing blocks in a different order than the one illustrated or in parallel with other processing blocks when possible. Thus, a generational thread scheduler may allocate a shared processor execution resource fairly over each generation of requests among requesting threads of executable instructions contending for access to the shared resource. Such a mechanism may avoid unbalanced degradation in performance for some threads due to unfair allocation of shared processor execution resources during periods of contention for access to those resources. The above description is intended to illustrate preferred embodiments of the present invention. From the discussion above it should also be apparent that especially in such an area of technology, where growth is fast and further advancements are not easily foreseen, the invention may be modified in arrangement and detail by those skilled in the art without departing from the principles of the present invention within the scope of the accompanying claims and their equivalents. |
Lithographic apparatuses suitable for, and methodologies involving, complementary e-beam lithography (CEBL) are described. In an example, a layout for a metallization layer of an integrated circuit includes a first region having a plurality of unidirectional lines of a first width and a first pitch and parallel with a first direction. The layout also includes a second region having a plurality of unidirectional lines of a second width and a second pitch and parallel with the first direction, the second width and the second pitch different than the first width and the first pitch, respectively. The layout also includes a third region having a plurality of unidirectional lines of a third width and a third pitch and parallel with the first direction, the third width and the third pitch different than the first and second widths and different than the first and second pitches. |
CLAIMS

What is claimed is:

1. A layout for a metallization layer of an integrated circuit, the layout comprising: a first region having a plurality of unidirectional lines of a first width and a first pitch and parallel with a first direction; a second region having a plurality of unidirectional lines of a second width and a second pitch and parallel with the first direction, the second width and the second pitch different than the first width and the first pitch, respectively; and a third region having a plurality of unidirectional lines of a third width and a third pitch and parallel with the first direction, the third width and the third pitch different than the first and second widths and different than the first and second pitches. 2. The layout of claim 1, wherein, in a second direction orthogonal to the first direction, the plurality of unidirectional lines of the second region do not overlap with the plurality of unidirectional lines of the first region, and the plurality of unidirectional lines of the third region do not overlap with the plurality of unidirectional lines of the first region or with the plurality of unidirectional lines of the second region. 3. The layout of claim 1, wherein, in a second direction orthogonal to the first direction, a portion of the plurality of unidirectional lines of the second region overlap with the plurality of unidirectional lines of the first region. 4. The layout of claim 3, wherein the plurality of unidirectional lines of the second region is interdigitated with the plurality of unidirectional lines of the first region. 5. The layout of claim 1, wherein the second width is 1.5 times the first width and the second pitch is 1.5 times the first pitch, and wherein the third width is 3 times the first width and the third pitch is 3 times the first pitch. 6. The layout of claim 1, wherein the first region is a logic region, the second region is an analog/IO region, and the third region is an SRAM region. 7. The layout of claim 1, wherein none of the first, second or third regions of the layout includes lines having jogs, orthogonal direction lines, or hooks. 8. A metallization layer of an integrated circuit, the metallization layer comprising: a first region having a plurality of unidirectional wires of a first width and a first pitch and parallel with a first direction; a second region having a plurality of unidirectional wires of a second width and a second pitch and parallel with the first direction, the second width and the second pitch different than the first width and the first pitch, respectively; and a third region having a plurality of unidirectional wires of a third width and a third pitch and parallel with the first direction, the third width and the third pitch different than the first and second widths and different than the first and second pitches. 9. The metallization layer of claim 8, wherein, in a second direction orthogonal to the first direction, the plurality of unidirectional wires of the second region do not overlap with the plurality of unidirectional wires of the first region, and the plurality of unidirectional wires of the third region do not overlap with the plurality of unidirectional wires of the first region or with the plurality of unidirectional wires of the second region. 10. The metallization layer of claim 8, wherein, in a second direction orthogonal to the first direction, a portion of the plurality of unidirectional wires of the second region overlap with the plurality of unidirectional wires of the first region. 11. 
The metallization layer of claim 10, wherein the plurality of unidirectional wires of the second region is interdigitated with the plurality of unidirectional wires of the first region. 12. The metallization layer of claim 8, wherein the second width is 1.5 times the first width and the second pitch is 1.5 times the first pitch, and wherein the third width is 3 times the first width and the third pitch is 3 times the first pitch. 13. The metallization layer of claim 8, wherein the first region is a logic region, the second region is an analog/IO region, and the third region is an SRAM region. 14. The metallization layer of claim 8, wherein none of the first, second or third regions of the metallization layer includes wires having jogs, orthogonal direction wires, or hooks. 15. A method of forming a pattern for a semiconductor structure, the method comprising: forming a pattern of lines above a substrate, the pattern of lines comprising: a first region having a plurality of unidirectional lines of a first width and a first pitch and parallel with a first direction; a second region having a plurality of unidirectional lines of a second width and a second pitch and parallel with the first direction, the second width and the second pitch different than the first width and the first pitch, respectively; and a third region having a plurality of unidirectional lines of a third width and a third pitch and parallel with the first direction, the third width and the third pitch different than the first and second widths and different than the first and second pitches; aligning the substrate in an e-beam tool to provide the pattern of lines parallel with a scan direction of the e-beam tool, the scan direction orthogonal to the first direction; and forming a pattern of cuts in or above the pattern of lines to provide line breaks for the pattern of lines by scanning the substrate along the scan direction. 16. The method of claim 15, wherein forming the pattern of cuts comprises using a three beam staggered blanker aperture array. 17. The method of claim 15, wherein forming the pattern of cuts comprises using a universal cutter blanker aperture array. 18. The method of claim 15, wherein forming the pattern of cuts comprises using a non-universal cutter blanker aperture array. 19. The method of claim 15, wherein forming the pattern of lines comprises using a pitch halving or pitch quartering technique. 20. The method of claim 15, wherein forming the pattern of cuts comprises exposing regions of a layer of photo-resist material. |
UNIDIRECTIONAL METAL ON LAYER WITH EBEAM

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of U.S. Provisional Application No. 62/012,220, filed on June 13, 2014, the entire contents of which are hereby incorporated by reference herein.

TECHNICAL FIELD

[0002] Embodiments of the invention are in the field of lithography and, in particular, lithography involving complementary e-beam lithography (CEBL).

BACKGROUND

[0003] For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips.

[0004] Integrated circuits commonly include electrically conductive microelectronic structures, which are known in the art as vias. Vias can be used to electrically connect metal lines above the vias to metal lines below the vias. Vias are typically formed by a lithographic process. Representatively, a photoresist layer may be spin coated above a dielectric layer, the photoresist layer may be exposed to patterned actinic radiation through a patterned mask, and then the exposed layer may be developed in order to form an opening in the photoresist layer. Next, an opening for the via may be etched in the dielectric layer by using the opening in the photoresist layer as an etch mask. This opening is referred to as a via opening. Finally, the via opening may be filled with one or more metals or other conductive materials to form the via.

[0005] In the past, the sizes and the spacing of vias have progressively decreased, and it is expected that in the future the sizes and the spacing of the vias will continue to progressively decrease, for at least some types of integrated circuits (e.g., advanced microprocessors, chipset components, graphics chips, etc.). One measure of the size of the vias is the critical dimension of the via opening. One measure of the spacing of the vias is the via pitch. Via pitch represents the center-to-center distance between the closest adjacent vias. When patterning extremely small vias with extremely small pitches by such lithographic processes, several challenges present themselves.

[0006] One such challenge is that the overlay between the vias and the overlying metal lines, and the overlay between the vias and the underlying metal lines, generally needs to be controlled to high tolerances on the order of a quarter of the via pitch. As via pitches scale ever smaller over time, the overlay tolerances tend to scale with them at an even greater rate than lithographic equipment is able to scale with.

[0007] Another such challenge is that the critical dimensions of the via openings generally tend to scale faster than the resolution capabilities of lithographic scanners. Shrink technologies exist to shrink the critical dimensions of the via openings. However, the shrink amount tends to be limited by the minimum via pitch, as well as by the ability of the shrink process to be sufficiently optical proximity correction (OPC) neutral, and to not significantly compromise line width roughness (LWR) and/or critical dimension uniformity (CDU).

[0008] Yet another such challenge is that the LWR and/or CDU characteristics of photoresists generally need to improve as the critical dimensions of the via openings decrease in order to maintain the same overall fraction of the critical dimension budget. 
However, currently the LWR and/or CDU characteristics of most photoresists are not improving as rapidly as the critical dimensions of the via openings are decreasing. A further such challenge is that the extremely small via pitches generally tend to be below the resolution capabilities of even extreme ultraviolet (EUV) lithographic scanners. As a result, commonly two, three, or more different lithographic masks may have to be used, which tends to increase the fabrication costs. At some point, if pitches continue to decrease, it may not be possible, even with multiple masks, to print via openings for these extremely small pitches using conventional scanners.
[0009] In the same vein, the fabrication of cuts (i.e., disruptions) in the metal line structures associated with metal vias is faced with similar scaling issues.
[0010] Thus, improvements are needed in the area of lithographic processing technologies and capabilities.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Figure 1A illustrates a cross-sectional view of a starting structure following deposition, but prior to patterning, of a hardmask material layer formed on an interlayer dielectric (ILD) layer.
[0012] Figure 1B illustrates a cross-sectional view of the structure of Figure 1A following patterning of the hardmask layer by pitch halving.
[0013] Figure 2 illustrates cross-sectional views in a spacer-based-sextuple-patterning (SBSP) processing scheme which involves pitch division by a factor of six.
[0014] Figure 3 illustrates cross-sectional views in a spacer-based-nonuple-patterning (SBNP) processing scheme which involves pitch division by a factor of nine.
[0015] Figure 4 is a cross-sectional schematic representation of an ebeam column of an electron beam lithography apparatus.
[0016] Figure 5 is a schematic demonstrating an optical scanner overlay limited by its ability to model in-plane grid distortions (IPGD).
[0017] Figure 6 is a schematic demonstrating distorted grid information using an align-on-the-fly approach, in accordance with an embodiment of the present invention.
[0018] Figure 7 provides a sample calculation showing the information to be transferred to pattern a general/conventional layout at 50% density on a 300 mm wafer in contrast to a via pattern at 5% density, in accordance with an embodiment of the present invention.
[0019] Figure 8 illustrates a gridded layout approach for simplified design rule locations for vias, and cut start/stop, in accordance with an embodiment of the present invention.
[0020] Figure 9 illustrates the allowable placement of cuts, in accordance with an embodiment of the present invention.
[0021] Figure 10 illustrates a via layout among lines A and B, in accordance with an embodiment of the present invention.
[0022] Figure 11 illustrates a cut layout among lines A-E, in accordance with an embodiment of the present invention.
[0023] Figure 12 illustrates a wafer having a plurality of die locations thereon and an overlying dashed box representing a wafer field of a single column, in accordance with an embodiment of the present invention.
[0024] Figure 13 illustrates a wafer having a plurality of die locations thereon and an overlying actual target wafer field of a single column and increased peripheral area for on-the-fly correction, in accordance with an embodiment of the present invention.
[0025] Figure 14 demonstrates the effect of a few degrees of wafer rotation on the area to be printed (inner dark, thin dashed) against the original target area (inner light, thick dashed), in accordance with an embodiment of the present invention.
[0026] Figure 15 illustrates a plan view of horizontal metal lines as represented overlaying vertical metal lines in the previous metallization layer, in accordance with an embodiment of the present invention.
[0027] Figure 16 illustrates a plan view of horizontal metal lines as represented overlaying vertical metal lines in the previous metallization layer, where metal lines of differing width/pitch overlap in a vertical direction, in accordance with an embodiment of the present invention.
[0028] Figure 17 illustrates a plan view of conventional metal lines as represented overlaying vertical metal lines in the previous metallization layer.
[0029] Figure 18 illustrates an aperture (left) of a BAA relative to a line (right) to be cut or to have vias placed in targeted locations while the line is scanned under the aperture.
[0030] Figure 19 illustrates two non-staggered apertures (left) of a BAA relative to two lines (right) to be cut or to have vias placed in targeted locations while the lines are scanned under the apertures.
[0031] Figure 20 illustrates two columns of staggered apertures (left) of a BAA relative to a plurality of lines (right) to be cut or to have vias placed in targeted locations while the lines are scanned under the apertures, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[0032] Figure 21A illustrates two columns of staggered apertures (left) of a BAA relative to a plurality of lines (right) having cuts (breaks in the horizontal lines) or vias (filled-in boxes) patterned using the staggered BAA, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[0033] Figure 21B illustrates a cross-sectional view of a stack of metallization layers in an integrated circuit based on metal line layouts of the type illustrated in Figure 21A, in accordance with an embodiment of the present invention.
[0034] Figure 22 illustrates apertures of a BAA having a layout of three different staggered arrays, in accordance with an embodiment of the present invention.
[0035] Figure 23 illustrates apertures of a BAA having a layout of three different staggered arrays, where the ebeam covers only one of the arrays, in accordance with an embodiment of the present invention.
[0036] Figure 24A includes a cross-sectional schematic representation of an ebeam column of an electron beam lithography apparatus having a deflector to shift the beam, in accordance with an embodiment of the present invention.
[0037] Figure 24B illustrates a three (or up to n) pitch array for a BAA 2450 having pitch #1, cut #1, a pitch #2, cut #2, and a pitch #N, cut #N, in accordance with an embodiment of the present invention.
[0038] Figure 24C illustrates a zoom-in slit for inclusion on an ebeam column, in accordance with an embodiment of the present invention.
[0039] Figure 25 illustrates apertures of a BAA having a layout of three different pitch staggered arrays, where the ebeam covers all of the arrays, in accordance with an embodiment of the present invention.
[0040] Figure 26 illustrates a three beam staggered aperture array (left) of a BAA relative to a plurality of large lines (right) having cuts (breaks in the horizontal lines) or vias (filled-in boxes) patterned using the BAA, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[0041] Figure 27 illustrates a three beam staggered aperture array (left) of a BAA relative to a plurality of medium-sized lines (right) having cuts (breaks in the horizontal lines) or vias (filled-in boxes) patterned using the BAA, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[0042] Figure 28 illustrates a three beam staggered aperture array (left) of a BAA relative to a plurality of small lines (right) having cuts (breaks in the horizontal lines) or vias (filled-in boxes) patterned using the BAA, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[0043] Figure 29A illustrates a three beam staggered aperture array (left) of a BAA relative to a plurality of lines of varying size (right) having cuts (breaks in the horizontal lines) or vias (filled-in boxes) patterned using the BAA, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[0044] Figure 29B illustrates a cross-sectional view of a stack of metallization layers in an integrated circuit based on metal line layouts of the type illustrated in Figure 29A, in accordance with an embodiment of the present invention.
[0045] Figure 30 illustrates a three beam staggered aperture array (left) of a BAA relative to a plurality of lines of varying size (right) having cuts (breaks in the horizontal lines) or vias (filled-in boxes) patterned using the BAA, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[0046] Figure 31 illustrates three sets of lines of differing pitch with overlying corresponding apertures on each line, in accordance with an embodiment of the present invention.
[0047] Figure 32 illustrates a plurality of different sized lines (right) including one very large line, and a beam aperture array's vertical pitch layout (three arrays) on a common grid, in accordance with an embodiment of the present invention.
[0048] Figure 33 illustrates a plurality of different sized lines (right), and a universal cutter pitch array (left), in accordance with an embodiment of the present invention.
[0049] Figure 34 demonstrates the 2× EPE rule for a universal cutter (left) as referenced against two lines (right), in accordance with an embodiment of the present invention.
[0050] Figure 35 illustrates a plan view and corresponding cross-sectional view of a previous layer metallization structure, in accordance with an embodiment of the present invention.
[0051] Figure 36A illustrates a cross-sectional view of a non-planar semiconductor device having fins, in accordance with an embodiment of the present invention.
[0052] Figure 36B illustrates a plan view taken along the a-a' axis of the semiconductor device of Figure 36A, in accordance with an embodiment of the present invention.
[0053] Figure 37 illustrates a computing device in accordance with one implementation of the invention.
[0054] Figure 38 illustrates a block diagram of an exemplary computer system, in accordance with an embodiment of the present invention.
[0055] Figure 39 is an interposer implementing one or more embodiments of the invention.
[0056] Figure 40 is a computing device built in accordance with an embodiment of the invention.
DESCRIPTION OF THE EMBODIMENTS
[0057] Lithographic apparatuses suitable for, and methodologies involving, complementary e-beam lithography (CEBL) are described. In the following description, numerous specific details are set forth, such as specific tooling, integration and material regimes, in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known features, such as single or dual damascene processing, are not described in detail in order to not unnecessarily obscure embodiments of the present invention. Furthermore, it is to be understood that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale. In some cases, various operations will be described as multiple discrete operations, in turn, in a manner that is most helpful in understanding the present invention; however, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation.
[0058] One or more embodiments described herein are directed to lithographic approaches and tooling involving or suitable for complementary e-beam lithography (CEBL), including semiconductor processing considerations when implementing such approaches and tooling.
[0059] Complementary lithography draws on the strengths of two lithography technologies, working hand-in-hand, to lower the cost of patterning critical layers in logic devices at 20nm half-pitch and below, in high-volume manufacturing (HVM). The most cost-effective way to implement complementary lithography is to combine optical lithography with e-beam lithography (EBL). The process of transferring integrated circuit (IC) designs to the wafer entails the following: optical lithography to print unidirectional lines (either strictly unidirectional or predominantly unidirectional) in a pre-defined pitch, pitch division techniques to increase line density, and EBL to "cut" the lines.
EBL is also used to pattern other critical layers, notably contact and via holes. Optical lithography can be used alone to pattern other layers. When used to complement optical lithography, EBL is referred to as CEBL, or complementary EBL. CEBL is directed to cutting lines and holes. By not attempting to pattern all layers, CEBL plays a complementary but crucial role in meeting the industry's patterning needs at advanced (smaller) technology nodes (e.g., 10nm or smaller such as 7nm or 5nm technology nodes). CEBL also extends the use of current optical lithography technology, tools and infrastructure.
[0060] As mentioned above, pitch division techniques can be used to increase a line density prior to using EBL to cut such lines. In a first example, pitch halving can be implemented to double the line density of a fabricated grating structure. Figure 1A illustrates a cross-sectional view of a starting structure following deposition, but prior to patterning, of a hardmask material layer formed on an interlayer dielectric (ILD) layer. Figure 1B illustrates a cross-sectional view of the structure of Figure 1A following patterning of the hardmask layer by pitch halving.
[0061] Referring to Figure 1A, a starting structure 100 has a hardmask material layer 104 formed on an interlayer dielectric (ILD) layer 102. A patterned mask 106 is disposed above the hardmask material layer 104. The patterned mask 106 has spacers 108 formed along sidewalls of features (lines) thereof, on the hardmask material layer 104.
[0062] Referring to Figure 1B, the hardmask material layer 104 is patterned in a pitch halving approach. Specifically, the patterned mask 106 is first removed. The resulting pattern of the spacers 108 has double the density, or half the pitch, of the features of the mask 106. The pattern of the spacers 108 is transferred, e.g., by an etch process, to the hardmask material layer 104 to form a patterned hardmask 110, as is depicted in Figure 1B. In one such embodiment, the patterned hardmask 110 is formed with a grating pattern having unidirectional lines. The grating pattern of the patterned hardmask 110 may be a tight pitch grating structure. For example, the tight pitch may not be achievable directly through conventional lithography techniques. Even further, although not shown, the original pitch may be quartered by a second round of spacer mask patterning. Accordingly, the grating-like pattern of the patterned hardmask 110 of Figure 1B may have hardmask lines spaced at a constant pitch and having a constant width relative to one another. The dimensions achieved may be far smaller than the critical dimension of the lithographic technique employed.
[0063] Accordingly, as a first portion of a CEBL integration scheme, a blanket film may be patterned using lithography and etch processing which may involve, e.g., spacer-based-double-patterning (SBDP) or pitch halving, or spacer-based-quadruple-patterning (SBQP) or pitch quartering. It is to be appreciated that other pitch division approaches may also be implemented.
[0064] For example, Figure 2 illustrates cross-sectional views in a spacer-based-sextuple-patterning (SBSP) processing scheme which involves pitch division by a factor of six. Referring to Figure 2, at operation (a), a sacrificial pattern X is shown following litho, slim and etch processing. At operation (b), spacers A and B are shown following deposition and etching. At operation (c), the pattern of operation (b) is shown following spacer A removal.
At operation (d), the pattern of operation (c) is shown following spacer C deposition. At operation (e), the pattern of operation (d) is shown following spacer C etch. At operation (f), a pitch/6 pattern is achieved following sacrificial pattern X removal and spacer B removal.
[0065] In another example, Figure 3 illustrates cross-sectional views in a spacer-based-nonuple-patterning (SBNP) processing scheme which involves pitch division by a factor of nine. Referring to Figure 3, at operation (a), a sacrificial pattern X is shown following litho, slim and etch processing. At operation (b), spacers A and B are shown following deposition and etching. At operation (c), the pattern of operation (b) is shown following spacer A removal. At operation (d), the pattern of operation (c) is shown following spacer C and D deposition and etch. At operation (e), a pitch/9 pattern is achieved following spacer C removal.
[0066] In any case, in an embodiment, complementary lithography as described herein involves first fabricating a gridded layout by conventional or state-of-the-art lithography, such as 193nm immersion lithography (193i). Pitch division may be implemented to increase the density of lines in the gridded layout by a factor of n. Gridded layout formation with 193i lithography plus pitch division by a factor of n can be designated as 193i + P/n Pitch Division. The pitch-divided gridded layout may then be patterned using electron beam direct write (EBDW) "cuts," as is described in greater detail below. In one such embodiment, 193nm immersion scaling can be extended for many generations with cost-effective pitch division. Complementary EBL is used to break grating continuity and to pattern vias.
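The arithmetic behind the "193i + P/n" designation is straightforward; the short sketch below works through it for the division factors named above. The starting pitch value is an assumed example, not a figure from this disclosure.

```python
# Illustrative arithmetic only: grating pitch after "193i + P/n" pitch division.
# The 80 nm starting pitch is an assumed example, not a value from this disclosure.

def divided_pitch(litho_pitch_nm: float, n: int) -> float:
    """Pitch after spacer-based pitch division by a factor of n
    (n = 2 for SBDP, 4 for SBQP, 6 for SBSP, 9 for SBNP)."""
    return litho_pitch_nm / n

litho_pitch_nm = 80.0  # assumed single-exposure 193i pitch
for scheme, n in [("SBDP", 2), ("SBQP", 4), ("SBSP", 6), ("SBNP", 9)]:
    print(f"{scheme} (P/{n}): {divided_pitch(litho_pitch_nm, n):.1f} nm")
# e.g., SBQP turns an 80 nm litho pitch into a 20 nm grating pitch
```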
[0067] More specifically, embodiments described herein are directed to patterning features during the fabrication of an integrated circuit. In one embodiment, CEBL is used to pattern openings for forming vias. Vias are metal structures used to electrically connect metal lines above the vias to metal lines below the vias. In another embodiment, CEBL is used to form non-conductive spaces or interruptions along the metal lines. Conventionally, such interruptions have been referred to as "cuts" since the process involved removal or cutting away of portions of the metal lines. However, in a damascene approach, the interruptions may be referred to as "plugs," which are regions along a metal line trajectory that are actually not metal at any stage of the fabrication scheme, but are rather preserved regions where metal cannot be formed. In either case, however, the terms cuts and plugs may be used interchangeably. Via opening and metal line cut or plug formation is commonly referred to as back end of line (BEOL) processing for an integrated circuit. In another embodiment, CEBL is used for front end of line (FEOL) processing. For example, the scaling of active region dimensions (such as fin dimensions) and/or associated gate structures can be performed using CEBL techniques as described herein.
[0068] As described above, electron beam (ebeam) lithography may be implemented to complement standard lithographic techniques in order to achieve desired scaling of features for integrated circuit fabrication. An electron beam lithography tool may be used to perform the ebeam lithography. In an exemplary embodiment, Figure 4 is a cross-sectional schematic representation of an ebeam column of an electron beam lithography apparatus.
[0069] Referring to Figure 4, an ebeam column 400 includes an electron source 402 for providing a beam of electrons 404. The beam of electrons 404 is passed through a limiting aperture 406 and, subsequently, through high aspect ratio illumination optics 408. The outgoing beam 410 is then passed through a slit 412 and may be controlled by a slim lens 414, e.g., which may be magnetic. Ultimately, the beam 404 is passed through a shaping aperture 416 (which may be a one-dimensional (1-D) shaping aperture) and then through a blanker aperture array (BAA) 418. The BAA 418 includes a plurality of physical apertures therein, such as openings formed in a thin slice of silicon. It may be the case that only a portion of the BAA 418 is exposed to the ebeam at a given time. Alternatively, or in conjunction, only a portion 420 of the ebeam 404 that passes through the BAA 418 is allowed to pass through a final aperture 422 (e.g., beam portion 421 is shown as blocked) and, possibly, a stage feedback deflector 424.
[0070] Referring again to Figure 4, the resulting ebeam 426 ultimately impinges as a spot 428 on a surface of a wafer 430, such as a silicon wafer used in IC manufacture. Specifically, the resulting ebeam may impinge on a photo-resist layer on the wafer, but embodiments are not so limited. A stage scan 432 moves the wafer 430 relative to the beam 426 along the direction of the arrow 434 shown in Figure 4. It is to be appreciated that an ebeam tool in its entirety may include numerous columns 400 of the type depicted in Figure 4. Also, as described in some embodiments below, the ebeam tool may have an associated base computer, and each column may further have a corresponding column computer.
[0071] One drawback of state-of-the-art e-beam lithography is that it is not readily adoptable into a high volume manufacturing (HVM) environment for advanced integrated circuit manufacturing. Today's e-beam tooling and associated methodology have proven to be too slow with respect to throughput requirements for HVM wafer processing. Embodiments described herein are directed to enabling the use of EBL in an HVM environment. In particular, many embodiments described herein enable improved throughput in an EBL tool to allow for the use of EBL in an HVM environment.
[0072] Described below are seven different aspects of embodiments that can improve EBL beyond its current capabilities. It is to be appreciated that, although broken out as seven distinct aspects of embodiments, embodiments described below may be used independently or in any suitable combination to achieve improvements in EBL throughput for an HVM environment. As described in greater detail below, in a first aspect, alignment considerations for a wafer subjected to ebeam patterning on an ebeam tool are addressed. In a second aspect, data compression or data reduction for ebeam tool simplification is described. In a third aspect, the implementation of regions of uniform metal or other grating pattern density for an integrated circuit layout is described. In a fourth aspect, a staggered blanker aperture array (BAA) for an ebeam tool is described. In a fifth aspect, a three beam aperture array for an ebeam tool is described. In a sixth aspect, a non-universal cutter for an ebeam tool is described.
In a seventh aspect, a universal cutter for an ebeam tool is described.
[0073] For all aspects, in an embodiment, when referring below to openings or apertures in a blanker aperture array (BAA), all or some of the openings or apertures of the BAA can be switched open or "closed" (e.g., by beam deflecting) as the wafer/die moves underneath along a wafer travel or scan direction. In one embodiment, the BAA can be independently controlled as to whether each opening passes the ebeam through to the sample or deflects the beam into, e.g., a Faraday cup or blanking aperture. The ebeam column or apparatus including such a BAA may be built to deflect the overall beam coverage to just a portion of the BAA, and then individual openings in the BAA are electrically configured to pass the ebeam ("on") or not pass ("off"). For example, un-deflected electrons pass through to the wafer and expose a resist layer, while deflected electrons are caught in the Faraday cup or blanking aperture. It is to be appreciated that reference to "openings" or "opening heights" refers to the spot size impinged on the receiving wafer and not to the physical opening in the BAA, since the physical openings are substantially larger (e.g., micron scale) than the spot size (e.g., nanometer scale) ultimately generated from the BAA. Thus, when the pitch of a BAA or of a column of openings in a BAA is said herein to "correspond" to the pitch of metal lines, such description actually refers to the relationship between the pitch of the impinging spots as generated from the BAA and the pitch of the lines being cut. As an example provided below, the spots generated from the BAA 2110 have a pitch the same as the pitch of the lines 2100 (when both columns of BAA openings are considered together). Meanwhile, the spots generated from only one column of the staggered array of the BAA 2110 have twice the pitch of the lines 2100.
[0074] For all aspects, it is also to be appreciated that, in some embodiments, an ebeam column as described above may also include other features in addition to those described in association with Figure 4. For example, in an embodiment, the sample stage can be rotated by 90 degrees to accommodate alternating metallization layers, which may be printed orthogonally to one another (e.g., rotated between X and Y scanning directions). In another embodiment, an e-beam tool is capable of rotating a wafer by 90 degrees prior to loading the wafer on the stage. Other additional embodiments are described below in association with Figures 24A-24C.
[0075] In a first aspect of embodiments of the present invention, alignment considerations for a wafer subjected to ebeam patterning on an ebeam tool are addressed.
[0076] Approaches described below may be implemented to overcome excessive contribution to edge placement error (EPE) from layer-to-layer physical overlay when a layer is patterned by an imaging tool (e.g., an optical scanner). In an embodiment, the approaches described below are applicable for an imaging tool that otherwise uses preselected sampling of wafer coordinate system markers (i.e., alignment marks) to estimate wafer-processing-induced in-plane grid distortion parameters on a processed wafer. The collected alignment information (e.g., sampled wafer in-plane grid distortion) is typically fit to a predefined-order polynomial.
The fit is then typically used as a representation of a distorted grid to adjust various scanner printing parameters and to achieve the best possible overlay between underlying and printed layers.
[0077] Instead, in an embodiment, use of an ebeam for patterning allows for collection of alignment information during a write at any point on the pattern containing underlying layer features ("align on the fly"), and not only at per-die alignment marks. For example, an electron detector is placed at the ebeam column bottom in order to collect backscattered electrons from alignment marks or other underlying patterned features. A straightforward linear model allows for collection of such information hundreds of times within every die as an ebeam column writes (and the detector detects) while the stage is scanning underneath the column during die exposure. In one such embodiment, there is no need for fitting polynomials and estimating complex higher-order correction parameters. Rather, only simple linear corrections can be used, as sketched below.
[0078] In an embodiment, in practice, many (hundreds of) positions of the ebeam can and will be registered against alignment marks patterned on a previous layer in scribe lines as well as inside active areas of the dies. The registering may be performed using drop-in cells usually present for the purpose of characterizing patterning characteristics of a layer pattern to be exposed, without loss of tool throughput or increase in COO (cost of ownership).
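The following is a minimal sketch of such a simple linear correction: an affine model fit by least squares to on-the-fly mark displacements, then applied to write coordinates through the stage position control. The mark coordinates, displacement values, and function names are illustrative assumptions, not data from this disclosure.

```python
# A sketch of the "simple linear corrections" mentioned above: fit an affine
# model to backscatter-detected mark residuals collected on the fly, then
# correct the write coordinates. All numbers below are placeholders.
import numpy as np

def fit_linear_correction(xy, dxdy):
    """Least-squares affine fit: predicted shift = [x, y, 1] @ coeffs.
    xy: (N, 2) nominal mark positions; dxdy: (N, 2) measured displacements."""
    design = np.hstack([xy, np.ones((len(xy), 1))])   # rows of [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(design, dxdy, rcond=None)
    return coeffs                                     # shape (3, 2)

def correct(xy, coeffs):
    """Apply the fitted correction to beam-write coordinates."""
    design = np.hstack([xy, np.ones((len(xy), 1))])
    return xy - design @ coeffs

marks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # mm
shifts = np.array([[0.002, -0.001], [0.003, -0.001], [0.002, 0.000], [0.003, 0.000]])
A = fit_linear_correction(marks, shifts)
print(correct(marks, A))  # write positions adjusted in real time via the stage
```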
[0079] In the case that on-the-fly alignment is not implemented, the alternative is to use higher-order polynomials, as described above. However, alignment based on higher-order polynomials is used to fit relatively sparse alignment information (e.g., only 10-15% of the die locations to be patterned are used to collect in-plane grid distortions on the wafer), whereas un-modeled (residual) fit errors constitute about 50% of the maximum total predicted overlay errors. Collecting much denser alignment information and using even higher-order polynomials for fitting and patterning correction might improve overlay somewhat, yet this would come at a significant throughput and cost-of-ownership penalty.
[0080] To provide context, wafer-processing-induced in-plane grid distortion occurs from multiple sources, including but not limited to backscatter/field displacement errors due to metal/other layers underneath the pattern being printed, wafer bowing/localized incremental wafer expansion due to pattern writing heat effects, and other additional effects that contribute greatly to EPE. If corrections are not made, the likelihood of patterning a wafer with localized gross patterning misalignment is very high.
[0081] Figure 5 is a schematic demonstrating an optical scanner overlay limited by its ability to model in-plane grid distortions (IPGD). Referring to the left-hand portion 502 of Figure 5, a die grid 504 on a wafer 506 is distorted by wafer processing. Vectors indicate corner displacement of every die versus the initial positioning (e.g., first layer print). Referring to the right-hand portion 510 of Figure 5, a conventional stepper will collect relatively sparse distorted grid information on this layer, as represented by the dots 512. Accordingly, using higher-order polynomials allows fitting of relatively sparse alignment information. The number of locations is optimized for "acceptable" residuals after the model fits to a grid representation obtained from grid coordinate information in the sampled locations. Overhead time is needed to collect this information.
[0082] In contrast to the relatively sparse distorted grid information collected as represented in Figure 5, Figure 6 is a schematic demonstrating distorted grid information using an align-on-the-fly approach, in accordance with an embodiment of the present invention. Referring to Figure 6, as an ebeam writes every die, the detector at the column bottom collects information about the positional coordinates of an underlying layer. Necessary adjustment to the writing position can be performed through stage position control in real time everywhere on the wafer at no or minimal overhead time increase or throughput loss. In particular, Figure 6 illustrates the same plot 602 as provided in Figure 5. A zoomed-in exemplary die region 604 illustrates the scanning directions 606 within the die region 604.
[0083] In a second aspect of embodiments of the present invention, data compression or data reduction for ebeam tool simplification is described.
[0084] Approaches described herein involve restricting data to allow massive compression of data, reducing the data path and ultimately providing for a much simpler ebeam writing tool. More particularly, embodiments described enable significant reduction in the amount of data that must be passed to an ebeam column of an ebeam tool. A practical approach is provided for allowing a sufficient amount of data to write the column field and adjust the column field for field edge placement error, while keeping within the electrical bandwidth limits of the physical hardware. Without implementing such embodiments, the required bandwidth is approximately 100 times that possible with today's electronics. In an embodiment, data reduction or compression approaches described herein can be implemented to substantially increase the throughput capabilities of an EBL tool. By increasing the throughput capabilities, EBL can more readily be adopted in an HVM environment, such as an integrated circuit manufacturing environment.
[0085] Figure 7 provides a sample calculation showing the information to be transferred to pattern a general/conventional layout at 50% density on a 300 mm wafer in contrast to a via pattern at 5% density, in accordance with an embodiment of the present invention. Referring to Figure 7, the information to be transferred is according to equation (A). Information transfer is according to equation (B), with information loss due to edge placement error (EPE) uncertainty, where ΔP is the minimal resolved feature and ΔPV is equal to 2×EPE. Assuming an EBDW tool resolution ΔP equal to 10 nm and an EPE equal to 2.5 nm, the information volume to be transferred by such a general-purpose imaging system in 1 m² (assuming 50% pattern density) will be according to equation (C). A 300 mm wafer area is 706 cm², which is 0.0706 m². Correspondingly, to pattern a general layout at 50% density on a 300 mm wafer, the number of bytes needed to be transferred is according to equation (D). The result is 70 TB to be transferred in 6 minutes, assuming 10 wph TPT, for a transfer rate of 194.4 GB/s. In accordance with an embodiment of the present invention, an EBDW tool that is designed to print vias (and/or cuts) at a pattern density of approximately 10% will require correspondingly less information to be transferred, e.g., at a realistic 40 GB/s transfer rate. In a specific embodiment, an EBDW tool is designed to print vias (and/or cuts) at a pattern density of approximately 5% and requires correspondingly less information to be transferred, e.g., 7 TB at a realistic 20 GB/s transfer rate.
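The transfer-rate arithmetic behind those figures can be checked directly; the sketch below reproduces only that final step (the per-area information content of equations (A)-(D) is not restated here). The function name and structure are illustrative.

```python
# Rough sanity check of the transfer-rate figures quoted above.
# Assumes 10 wafers per hour (wph), i.e., 6 minutes of writing per wafer,
# and the 70 TB / 7 TB per-wafer data volumes from the text.

def required_transfer_rate_gbps(data_volume_tb: float, wafers_per_hour: float) -> float:
    """Sustained rate in GB/s needed to stream data_volume_tb terabytes
    during one wafer's write time."""
    seconds_per_wafer = 3600.0 / wafers_per_hour
    return data_volume_tb * 1e12 / seconds_per_wafer / 1e9

# General layout at ~50% pattern density: ~70 TB per wafer.
print(required_transfer_rate_gbps(70, 10))   # ~194.4 GB/s
# Via/cut-only layout at ~5% density: ~7 TB per wafer.
print(required_transfer_rate_gbps(7, 10))    # ~19.4 GB/s, i.e., a realistic ~20 GB/s
```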
[0086] With reference again to Figure 7, the information transfer is reduced to a relative (integerized) distance instead of transferring absolute 64-bit coordinates. By using an ebeam tool to pattern only vias at less than approximately 10% density, and even as low as 5% density, versus a general layout pattern at 50% density, for example, a reduction in the amount of data transfer from 70+ TB in 6 minutes to less than 7 TB in 6 minutes can be realized, allowing the ebeam apparatus to achieve the manufacturing throughput needed for high volume production.
[0087] In an embodiment, one or more of the following four approaches is implemented for data reduction: (1) all design rules for vias and cuts are simplified to reduce the number of positions that a via can occupy, and where the start and stop of a line cut can possibly be located; (2) the placement of cut starts and stops, as well as the distances between vias, is encrypted as n*min distance (this removes the need to send a 64-bit address for each start and stop location for a cut, as well as for via locations); (3) for each column in the tool, only the data required to make the cuts and vias that fall within this section of the wafer are forwarded to the column computer (each column receives only the data needed, in a form encrypted as in part 2); and/or (4) for each column in the tool, the area that is transmitted is increased by n lines at top and bottom, and additional breadth in x is also allowed (accordingly, the associated column computer can adjust on the fly for changes in wafer temperature and alignment without having the entire wafer data transmitted). In an embodiment, implementation of one or more such data reduction approaches enables simplification of an ebeam tool at least to some extent. For example, a dedicated computer or processor normally associated with a single dedicated column in a multi-column ebeam tool may be simplified or even altogether eliminated. That is, a single column equipped with on-board dedicated logic capability may be simplified to move the logic capability off-board or to reduce the amount of on-board logic capability required for each individual column of the ebeam tool.
[0088] With respect to approach (1) above, Figure 8 illustrates a gridded layout approach for simplified design rule locations for vias, and cut start/stop, in accordance with an embodiment of the present invention. A horizontal grid 800 includes a regular arrangement of line positions, with solid lines 802 representing actual lines and dashed lines 804 representing unoccupied line positions. The key to this technique is that vias (filled-in boxes 806) are on a regular grid (shown as the vertical grid 808 in Figure 8) and are printed in the scan direction 810 parallel with the metal lines (horizontal rectangles with solid outline) that are below the vias. The requirement for this design system is that via locations 806 are formed only in alignment with the vertical grid 808.
[0089] With respect to cuts, cuts are made with a grid that is finer than the via grid. Figure 9 illustrates the allowable placement of cuts, in accordance with an embodiment of the present invention. Referring to Figure 9, an array of lines 902 has vias 904 positioned therein according to grid 906.
The allowable placement of cuts (e.g., labeled cuts 908, 910 and 912) is indicated by the vertical dashed lines 914, with the via locations continuing as vertical solid lines 906. The cuts always start, and stop, exactly on the grid 914, which is key to reducing the amount of data transferred from the base computer down to the column computer. It is to be appreciated, however, that the position of the dashed vertical lines 914 appears to be a regular grid, but that is not a requirement. Instead, the pair of lines centered around the via cut lines is the known distance of -xn and +xn relative to the via location. The via locations are a regular grid that is spaced every m units along the cut direction.
[0090] With respect to approach (2) above, distance-based encryption of cuts and vias may be used to eliminate the need to send 64-bit full addresses. For example, rather than sending absolute 64-bit (or 128-bit) addresses for x and y positions, the distance along the direction of travel from the left edge (for wafer lines printing in the direction moving to the right) or from the right edge (for wafer lines printing in the direction moving to the left) is encrypted. The pair of lines centered around the via lines is the known distance of -xn and +xn relative to the via location, and the via locations are a regular grid that is spaced every m units along the cut direction. Any via print location can thus be encrypted as a distance from zero to the numbered via location (spaced m units apart). This significantly reduces the amount of positioning data that must be transmitted.
[0091] The amount of information can be further reduced by providing the machine with the relative count of vias from the previous via. Figure 10 illustrates a via layout among lines A and B, in accordance with an embodiment of the present invention. Referring to Figure 10, the two lines as shown can be reduced as follows: line A: via 1002 spacing +1,+4,+1,+2; line B: via 1004 spacing +9. The via 1002/1004 spacing is according to grid 1006. It is to be appreciated that additional communication theory of assignment of most likely terms could be applied to further reduce the data space. Even so, ignoring such further reduction still yields an excellent improvement, using straightforward compression to reduce 4 vias of 64-bit position to just a handful of bits.
[0092] Similarly, the start and stop of cuts can be reduced to eliminate the need to send 64 bits (or 128 bits) of positional information for each cut. Like a light switch, starting a cut means the next data point is the end of the cut, and similarly the next location is the start of the next cut. Since it is known that cuts end +xn in the direction of travel from via locations (and similarly start at -xn), depending upon cut start/stop, the via location can be encoded and the local column computer can be instructed to reapply the offset from the via location.
Figure 11 illustrates a cut layout among lines A-E, in accordance with an embodiment of the present invention. Referring to Figure 11, a substantial decrease over sending absolute 64 (or 128) bit locations results. Spacing from the previous cut: A: +5 (shown as space 1102), +1; B: x <no cuts> (whatever x is encrypted as, i.e., no cuts for that distance); C: +1 (the stopping point of the cut at the left), +4 (the start of the large cut aligned vertically with the start of cut 1102), +3 (the end of the large cut); D: +3, +4; E: +3, +2, +1, +4. A minimal sketch of this delta encoding is given below.
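The sketch assumes grid-aligned positions and the alternating start/stop convention described above; the function names and the specific via positions reconstructed for line A are illustrative assumptions, not values from Figure 10 beyond its printed deltas.

```python
# A sketch of the relative-distance encoding from approaches (1) and (2):
# grid-aligned via positions are sent as small deltas from the previous via,
# and cut starts/stops alternate like a light switch, so only the spacing
# from the previous edge is sent rather than an absolute 64-bit coordinate.

def encode_vias(via_indices):
    """Encode grid-aligned via positions on one line as deltas from the previous via."""
    deltas, prev = [], 0
    for idx in sorted(via_indices):
        deltas.append(idx - prev)
        prev = idx
    return deltas

def decode_vias(deltas):
    """Recover absolute grid positions from the delta stream."""
    out, pos = [], 0
    for d in deltas:
        pos += d
        out.append(pos)
    return out

# Line A from Figure 10: vias at grid positions 1, 5, 6, 8 -> +1, +4, +1, +2
assert encode_vias([1, 5, 6, 8]) == [1, 4, 1, 2]
assert decode_vias([1, 4, 1, 2]) == [1, 5, 6, 8]

def encode_cuts(cut_edges):
    """Cut edges alternate start/stop, so the same delta scheme applies; the
    column computer re-applies the known -xn/+xn offset from the via grid."""
    return encode_vias(cut_edges)
```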
[0093] With respect to approach (3) above, for each column, the data transmitted for cuts and vias is restricted to just that required for the wafer field that falls under the given column. In an example, Figure 12 illustrates a wafer 1200 having a plurality of die locations 1202 thereon and an overlying dashed box 1204 representing the wafer field of a single column, in accordance with an embodiment of the present invention. Referring to Figure 12, the data transmitted to the local column computer is limited to only the lines that occur in the printed region shown in the dotted lines of box 1204.
[0094] With respect to approach (4) above, since correction for wafer bow, heating, and chuck misalignment by an angle theta must be done on the fly, the actual region transmitted to the column computer is a few lines larger at top and bottom, with additional data to the left and right as well. Figure 13 illustrates a wafer 1300 having a plurality of die locations 1302 thereon and an overlying actual target wafer field 1304 of a single column. As shown in Figure 13, an increased peripheral area 1306 is provided to account for on-the-fly correction, in accordance with an embodiment of the present invention. Referring to Figure 13, while the increased peripheral area 1306 slightly increases the amount of data transmitted to the column computer, it also allows the column printing to correct for wafer misalignment resulting from a myriad of issues by allowing the column to print outside its normal region. Such issues may include wafer alignment issues or local heating issues, etc.
[0095] Figure 14 demonstrates the effect of a few degrees of wafer rotation on the area to be printed (inner dark, thin dashed box 1402) against the original target area (inner light, thick dashed box 1304) from Figure 13, in accordance with an embodiment of the present invention. Referring to Figure 14, the column computer is able to use the additional transmitted data to make the necessary printing changes without requiring a complex rotational chuck on the machine (which would otherwise limit the speed of the printing).
[0096] In a third aspect of embodiments of the present invention, the implementation of regions of uniform metal or other grating pattern density for an integrated circuit layout is described.
[0097] In an embodiment, in order to improve throughput of an ebeam apparatus, design rules for interconnect layers are simplified to enable a fixed set of pitches that can be used for logic, SRAM, and analog/IO regions on the die. In one such embodiment, the metal layout further requires that the wires be unidirectional, with no jogs, orthogonal direction wires, or hooks on the ends, as are currently used to enable via landings in conventional, non-ebeam lithography processes.
[0098] In a particular embodiment, three different wire widths of unidirectional wire are permitted within each metallization layer. Gaps in the wires are cut precisely, and all of the vias are self-aligned to a maximum allowed size. The latter is an advantage in minimizing via resistance for extremely fine pitch wiring. The approach described herein permits efficient ebeam line cutting and via printing that achieves orders of magnitude improvement over existing ebeam solutions.
[0099] Figure 15 illustrates a plan view of horizontal metal lines 1502 as represented overlaying vertical metal lines 1504 in the previous metallization layer, in accordance with an embodiment of the present invention. Referring to Figure 15, three different pitches/widths 1506, 1508 and 1510 of wires are permitted. The different line types may be segregated into chip regions 1512, 1514 and 1516, respectively, as shown. It is to be appreciated that regions are generally larger than shown, but to draw them to scale would make the detail on the wires comparatively small. Such regions on the same layer may be fabricated first using conventional lithography techniques.
[00100] The advances described in embodiments herein permit precise wire trimming and fully self-aligned vias between layers. It is to be appreciated that trims occur as needed, with no trim-trim (plug) rules required as in current litho-based processes. Furthermore, in an embodiment, via-via rules are significantly removed. Vias of the density and relationship shown would be difficult or impossible to print using current optical proximity correction (OPC)-enabled lithography capability. Similarly, the plug/cut rules that would otherwise preclude some of the cuts shown are removed through use of this technique. As such, the interconnect/via layers are less limiting to the design of circuits.
[00101] Referring again to Figure 15, in the vertical direction, lines of different pitches and widths are not overlapping, i.e., each region is segregated in a vertical direction. By contrast, Figure 16 illustrates a plan view of horizontal metal lines 1602 as represented overlaying vertical metal lines 1604 in the previous metallization layer, where metal lines of differing width/pitch overlap in a vertical direction, in accordance with an embodiment of the present invention. For example, line pair 1606 overlaps in the vertical direction, and line pair 1608 overlaps in the vertical direction. Referring again to Figure 16, the regions may be fully overlapping. The wires of all three sizes may be interdigitated, if enabled by the lines fabrication method, yet cuts and vias continue to be fully enabled by a universal cutter, as described below in association with another aspect of embodiments of the present invention.
[00102] To provide context, Figure 17 illustrates a plan view of conventional metal lines 1702 as represented overlaying vertical metal lines in the previous metallization layer. Referring to Figure 17, in contrast to the layouts of Figures 15 and 16, bi-directional wires are used conventionally. Such wiring adds orthogonal wiring in the form of long orthogonal wires, short jogs between tracks to change lanes, and "hooks" at the ends of wires to place a via such that line pullback does not encroach on the vias. Examples of such constructs are shown at the X positions in Figure 17. It could be argued that allowance of such orthogonal constructs provides some small density advantage (particularly the track jog at the upper X), but these significantly add design rule complexity/design rule checking, as well as preclude a tool such as the ebeam methodology from achieving needed throughput.
Referring again to Figure 17, it is to be appreciated that conventional OPC/lithography would preclude some of the vias shown on the left-hand side from actually being fabricated.
[00103] In a fourth aspect of embodiments of the present invention, a staggered blanker aperture array (BAA) for an ebeam tool is described.
[00104] In an embodiment, a staggered beam aperture array is implemented to address the throughput of an ebeam machine while also enabling minimum wire pitch. With no stagger, consideration of edge placement error (EPE) means that a minimum pitch that is twice the wire width cannot be cut, since there is no possibility of stacking the apertures vertically in a single column. For example, Figure 18 illustrates an aperture 1800 of a BAA relative to a line 1802 to be cut or to have vias placed in targeted locations while the line is scanned along the direction of the arrow 1804 under the aperture 1800. Referring to Figure 18, for a given line 1802 to be cut or to have vias placed, the EPE 1806 of the cutter opening (aperture) results in a rectangular opening in the BAA grid that is the pitch of the line.
[00105] Figure 19 illustrates two non-staggered apertures 1900 and 1902 of a BAA relative to two lines 1904 and 1906, respectively, to be cut or to have vias placed in targeted locations while the lines are scanned along the direction of the arrow 1908 under the apertures 1900 and 1902. Referring to Figure 19, when the rectangular opening 1800 of Figure 18 is placed in a vertical single column with other such rectangular openings (e.g., now as 1900 and 1902), the allowed pitch of the lines to be cut is limited by 2x EPE 1910, plus the distance requirement 1912 between the BAA opens 1900 and 1902, plus the width of one wire 1904 or 1906. The resulting spacing 1914 is shown by the arrow on the far right of Figure 19. Such a linear array would severely limit the pitch of the wiring to substantially greater than 3-4x the width of the wires, which may be unacceptable. Another unacceptable alternative would be to cut tighter pitch wires in two (or more) passes with slightly offset wire locations; this approach could severely limit the throughput of the ebeam machine.
[00106] By contrast to Figure 19, Figure 20 illustrates two columns 2002 and 2004 of staggered apertures 2006 of a BAA 2000 relative to a plurality of lines 2008 to be cut or to have vias placed in targeted locations while the lines 2008 are scanned along the direction 2010 under the apertures 2006, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention. Referring to Figure 20, a staggered BAA 2000 includes two linear arrays 2002 and 2004, staggered spatially as shown. The two staggered arrays 2002 and 2004 cut (or place vias at) alternate lines 2008. The lines 2008 are, in one embodiment, placed on a tight grid at twice the wire width. As used throughout the present disclosure, the term staggered array can refer to a staggering of openings 2006 that stagger in one direction (e.g., the vertical direction) and either have no overlap or have some overlap when viewed as scanning in the orthogonal direction (e.g., the horizontal direction). In the latter case, the effective overlap provides for tolerance in misalignment.
[00107] It is to be appreciated that, although a staggered array is shown herein as two vertical columns for simplicity, the openings or apertures of a single "column" need not be columnar in the vertical direction.
For example, in an embodiment, so long as a first array collectively has a pitch in the vertical direction, and a second array staggered in the scan direction from the first array collectively has the same pitch in the vertical direction, a staggered array is achieved. Thus, reference to or depiction of a vertical column herein can actually be made up of one or more columns unless specified as being a single column of openings or apertures. In one embodiment, in the case that a "column" of openings is not a single column of openings, any offset within the "column" can be compensated with strobe timing. In an embodiment, the critical point is that the openings or apertures of a staggered array of a BAA lie on a specific pitch in the first direction, but are offset in the second direction to allow them to place cuts or vias without any gap between cuts or vias in the first direction.
[00108] Thus, one or more embodiments are directed to a staggered beam aperture array where openings are staggered to allow cut and/or via EPE requirements to be met, as opposed to an inline arrangement that cannot accommodate EPE technology needs. By contrast, with no stagger, the problem of edge placement error (EPE) means that a minimum pitch that is twice the wire width cannot be cut, since there is no possibility of stacking vertically in a single column. Instead, in an embodiment, use of a staggered BAA enables patterning much greater than 4000 times faster than individually ebeam writing each wire location. Furthermore, a staggered array allows a wire pitch that is twice the wire width. In a particular embodiment, an array has 4096 staggered openings over two columns such that the EPE requirement for each of the cut and via locations can be met. It is to be appreciated that a staggered array, as contemplated herein, may include two or more columns of staggered openings; a minimal sketch of the line-to-aperture bookkeeping follows.
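The sketch below illustrates the mapping just described for a two-column staggered BAA: alternate lines are served by alternate columns, so each column only needs openings at twice the line pitch, and the stagger along the scan direction is absorbed by strobe timing. All names and numeric values are illustrative assumptions, not parameters of an actual tool.

```python
# Bookkeeping sketch for a two-column staggered BAA at line pitch = 2x wire width.

def aperture_for_line(line_index: int):
    """Map a line (counted along the vertical pitch) to (column, row) in the BAA."""
    column = line_index % 2   # alternate lines go to alternate staggered columns
    row = line_index // 2     # each column has openings at twice the line pitch
    return column, row

def strobe_time(cut_x_nm: float, column: int, column_offset_nm: float,
                stage_velocity_nm_per_s: float) -> float:
    """Time at which an aperture must open so its spot lands at cut_x_nm,
    compensating the second column's offset along the scan direction."""
    return (cut_x_nm + column * column_offset_nm) / stage_velocity_nm_per_s

for line in range(6):
    print(line, aperture_for_line(line))
# lines 0, 2, 4 -> column 0; lines 1, 3, 5 -> column 1, halving the per-column pitch
```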
[00109] In an embodiment, use of a staggered array leaves space for including metal around the apertures of the BAA, which contains one or two electrodes for passing or steering the ebeam to the wafer or steering it to a Faraday cup or blanking aperture. That is, each opening may be separately controlled by electrodes to pass or deflect the ebeam. In one embodiment, the BAA has 4096 openings, and the ebeam apparatus covers the entire array of 4096 openings, with each opening electrically controlled. Throughput improvements are enabled by sweeping the wafer under the openings as shown by the thick black arrows.
[00110] In a particular embodiment, a staggered BAA has two rows of staggered BAA openings. Such an array permits tight pitch wires, where the wire pitch can be 2x the wire width. Furthermore, all wires can be cut in a single pass (or vias can be made in a single pass), thereby enabling throughput on the ebeam machine. Figure 21A illustrates two columns of staggered apertures (left) of a BAA relative to a plurality of lines (right) having cuts (breaks in the horizontal lines) or vias (filled-in boxes) patterned using the staggered BAA, with scanning direction shown by the arrow, in accordance with an embodiment of the present invention.
[00111] Referring to Figure 21A, the line result from a single staggered array could be as depicted, where lines are of a single pitch, with cuts and vias patterned. In particular, Figure 21A depicts a plurality of lines 2100 or open line positions 2102 where no lines exist. Vias 2104 and cuts 2106 may be formed along lines 2100. The lines 2100 are shown relative to a BAA 2110 having a scanning direction 2112. Thus, Figure 21A may be viewed as a typical pattern produced by a single staggered array. Dotted lines show where cuts occurred in the patterned lines (including a total cut to remove a full line or line portion). The via locations 2104 are patterned vias that land on top of the wires 2100.
[00112] In an embodiment, all or some of the openings or apertures of the BAA 2110 can be switched open or "closed" (e.g., beam deflecting) as the wafer/die moves underneath along the wafer travel direction 2112. In an embodiment, the BAA can be independently controlled as to whether each opening passes the ebeam through to the sample or deflects the beam into, e.g., a Faraday cup or blanking aperture. The apparatus may be built to deflect the overall beam coverage to just a portion of the BAA, and then individual openings in the BAA are electrically configured to pass the ebeam ("on") or not pass ("off"). It is to be appreciated that reference to "openings" or "opening heights" refers to the spot size impinged on the receiving wafer and not to the physical opening in the BAA, since the physical openings are substantially larger (e.g., micron scale) than the spot size (e.g., nanometer scale) ultimately generated from the BAA. Thus, when the pitch of a BAA or of a column of openings in a BAA is said herein to "correspond" to the pitch of metal lines, such description actually refers to the relationship between the pitch of the impinging spots as generated from the BAA and the pitch of the lines being cut. As an example, the spots generated from the BAA 2110 have a pitch the same as the pitch of the lines 2100 (when both columns of BAA openings are considered together). Meanwhile, the spots generated from only one column of the staggered array of the BAA 2110 have twice the pitch of the lines 2100.
[00113] It is also to be appreciated that an ebeam column that includes a staggered beam aperture array (staggered BAA) as described above may also include other features in addition to those described in association with Figure 4, some examples of which are further described in greater detail below in association with Figures 24A-24C. For example, in an embodiment, the sample stage can be rotated by 90 degrees to accommodate alternating metallization layers, which may be printed orthogonally to one another (e.g., rotated between X and Y scanning directions). In another embodiment, an e-beam tool is capable of rotating a wafer by 90 degrees prior to loading the wafer on the stage.
[00114] Figure 21B illustrates a cross-sectional view of a stack 2150 of metallization layers 2152 in an integrated circuit based on metal line layouts of the type illustrated in Figure 21A, in accordance with an embodiment of the present invention. Referring to Figure 21B, in an exemplary embodiment, a metal cross-section for an interconnect stack 2150 is derived from a single BAA array for the lower eight matched metal layers 2154, 2156, 2158, 2160, 2162, 2164, 2166 and 2168. It is to be appreciated that the upper thicker/wider metal lines 2170 and 2172 would not be made with the single BAA. Via locations 2174 are depicted as connecting the lower eight matched metal layers 2154, 2156, 2158, 2160, 2162, 2164, 2166 and 2168.
[00115] In a fifth aspect of embodiments of the present invention, a three beam aperture array for an ebeam tool is described.
[00116] In an embodiment, a beam aperture array is implemented to address the throughput of an ebeam machine while also enabling minimum wire pitch.
As described above, with no stagger, the problem of edge placement error (EPE) means that a minimum pitch that is twice the wire width cannot be cut since there is no possibility of stacking vertically in a single stack. Embodiments described below extend the staggered BAA concept to permit three separate pitches to be exposed on a wafer, either through three passes, or by illuminating/controlling all three beam aperture arrays simultaneously in a single pass. The latter approach may be preferable for achieving the best throughput.[00117] In some implementations, a three-array staggered beam aperture array is used instead of a single beam aperture array. The pitches of the three different arrays may either be related (e.g., 10-20-30) or unrelated. The three pitches can be used in three separate regions on the target die, or the three pitches may occur simultaneously in the same localized region.[00118] To provide context, the use of two or more single arrays would require a separate ebeam apparatus, or a change-out of the beam aperture array for each different hole size/wire pitch. The result would otherwise be a throughput limiter and/or a cost of ownership issue. Instead, embodiments described herein are directed to BAAs having more than one (e.g., three) staggered array. In one such embodiment (in the case of including three arrays on one BAA), three different pitches can be patterned on a wafer without loss of throughput. Furthermore, the beam pattern may be steered to cover one of the three arrays. An extension of this technique can be used to pattern any mixture of different pitches by turning on and off the blanker holes in all three arrays as needed.[00119] As an example, Figure 22 illustrates apertures of a BAA 2200 having a layout of three different staggered arrays, in accordance with an embodiment of the present invention. Referring to Figure 22, a blanker aperture array 2200 having three columns 2202, 2204 and 2206 can be used for three different line pitches for cutting or making vias by all or some of the apertures 2208, which are switched open or "closed" (beam deflecting) as the wafer/die moves underneath along the wafer travel direction 2210. In one such embodiment, multiple pitches can be patterned without changing the BAA plate in the device. Furthermore, in a particular embodiment, multiple pitches can be printed at the same time. Both techniques allow many spots to be printed during a continuous pass of the wafer under the BAA. It is to be appreciated that while the focus of the description is on three separate columns of different pitches, embodiments can be extended to include any number of pitches that can fit within the apparatus, e.g., 1, 2, 3, 4, 5, etc.[00120] In an embodiment, the BAA can be independently controlled as to whether each opening passes the ebeam or deflects the beam into a Faraday cup or blanking aperture. The apparatus may be built to deflect the overall beam coverage to just a single pitch column, and then individual openings in the pitch column are electrically configured to pass the ebeam ("on") or not pass ("off"). As an example, Figure 23 illustrates apertures 2308 of a BAA 2300 having a layout of three different staggered arrays 2302, 2304 and 2306, where the ebeam covers only one of the arrays (e.g., array 2304), in accordance with an embodiment of the present invention. In such an apparatus configuration, throughput could be gained for specific areas on a die that contain only a single pitch. The direction of travel of the underlying wafer is indicated by arrow 2310.
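As a rough illustration of this per-opening electrical control (a sketch only; the function name, data shapes, and index values are hypothetical, not the disclosed control electronics), the state of the array at a given scan step can be thought of as a boolean mask derived from the desired cut and via exposures:

```python
# Illustrative sketch: each BAA opening is driven by its own blanker electrode,
# so a cut/via pattern can be expressed as a boolean mask per scan step --
# True passes the ebeam to the wafer, False deflects it into a Faraday cup or
# blanking aperture.

NUM_OPENINGS = 4096  # hypothetical total opening count

def blanker_states(cut_requests, scan_step):
    """Return the on/off state of every opening for one scan step.

    cut_requests: set of (opening_index, scan_step) pairs where a cut or via
    exposure is wanted; all other openings stay 'off' (beam deflected).
    """
    return [(i, scan_step) in cut_requests for i in range(NUM_OPENINGS)]

# Example: expose openings 7 and 1033 at scan step 42 only.
wanted = {(7, 42), (1033, 42)}
states = blanker_states(wanted, scan_step=42)
assert states[7] and states[1033] and not states[8]
```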
[00121] In one embodiment, in order to switch between pitch arrays, a deflector can be added to the ebeam column to allow the ebeam to be steerable onto the BAA pitch array. As an example, Figure 24A includes a cross-sectional schematic representation of an ebeam column of an electron beam lithography apparatus having a deflector to shift the beam, in accordance with an embodiment of the present invention. Referring to Figure 24A, an ebeam column 2400, such as described in association with Figure 4, includes a deflector 2402. The deflector can be used to shift the beam onto an appropriate pitch/cut row in a shaping aperture corresponding to an appropriate array of a BAA 2404 having multiple pitch arrays. As an example, Figure 24B illustrates a three (or up to N) pitch array for a BAA 2450 having pitch #1, cut #1 (2452), pitch #2, cut #2 (2454), and pitch #N, cut #N (2456). It is to be appreciated that the height of cut #n is not equal to the height of cut #n+m.[00122] Other features may also be included in the ebeam column 2400. For example, further referring to Figure 24A, in an embodiment, the stage can be rotated by 90 degrees to accommodate alternating metallization layers which may be printed orthogonally to one another (e.g., rotated between X and Y scanning directions). In another embodiment, an e-beam tool is capable of rotating a wafer by 90 degrees prior to loading the wafer on the stage. In yet another example, Figure 24C illustrates a zoom-in slit 2460 for inclusion on an ebeam column. The positioning of such a zoom-in slit 2460 on column 2400 is shown in Figure 24A. The zoom-in slit 2460 may be included to keep efficiency for different cut heights. It is to be appreciated that one or more of the above described features may be included in a single ebeam column.[00123] In another embodiment, the ebeam fully illuminates multiple or all columns of pitches on the BAA. In such a configuration, all of the illuminated BAA openings would be electrically controlled to be "on" to pass the ebeam to the die, or "off" to prevent the ebeam from reaching the die. The advantage of such an arrangement is that any combination of holes could be used to print line cuts or via locations without reducing throughput. While the arrangement described in association with Figures 23 and 24A-24C could also be used to produce a similar result, a separate pass across the wafer/die for each of the pitch arrays would be required (which would reduce throughput by a factor of 1/n, where n is the number of pitch arrays on the BAA that require printing).[00124] Figure 25 illustrates apertures of a BAA having a layout of three different pitch staggered arrays, where the ebeam covers all of the arrays, in accordance with an embodiment of the present invention. Referring to Figure 25, apertures 2508 of a BAA 2500 are laid out as three different staggered arrays 2502, 2504 and 2506, and the ebeam can cover all of the arrays (e.g., arrays 2502, 2504 and 2506) simultaneously. The direction of travel of the underlying wafer is indicated by arrow 2510.[00125] In either the case of Figure 23 or Figure 25, having three pitches of openings permits the cutting or via creation for three different line or wire widths. However, the lines must be in alignment with the apertures of the corresponding pitch array (by contrast, a universal cutter is disclosed below).
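A minimal sketch of this alignment requirement follows (the sub-array names and the 20/30/60nm pitch values are assumptions for illustration, loosely following the 1x/1.5x/3x relationship discussed below): a line is addressable only if its center falls on a spot center of the sub-array whose pitch matches the local line pitch.

```python
# Hedged sketch of the alignment requirement for a non-universal, multi-pitch
# BAA: a line can only be cut if its center sits on a spot center of the
# sub-array whose pitch matches the local line pitch. Values are illustrative.

SUB_ARRAY_PITCH_NM = {"small": 20, "medium": 30, "large": 60}

def matching_array(line_center_nm: float, line_pitch_nm: int):
    """Return the sub-array that can cut this line, or None if misaligned."""
    for name, pitch in SUB_ARRAY_PITCH_NM.items():
        if pitch == line_pitch_nm and line_center_nm % pitch == pitch / 2:
            return name
    return None  # line not addressable without realignment

assert matching_array(10, 20) == "small"   # centered in the first 20nm track
assert matching_array(25, 20) is None      # off-grid: no sub-array can cut it
```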
Figure 26 illustrates a three beam staggered aperture array 2600 of a BAA relative to a plurality of large lines 2602 having cuts (e.g., breaks 2604 in the horizontal lines) or vias (filled-in boxes 2606) patterned using the BAA, with scanning direction shown by the arrow 2608, in accordance with an embodiment of the present invention. Referring to Figure 26, all the lines in a local region are of the same size (in this case, corresponding to the largest apertures 2610 on the right side of the BAA). Thus, Figure 26 illustrates a typical pattern produced by one of three staggered beam aperture arrays. Dotted lines show where cuts occurred in patterned lines. Dark rectangles are patterning vias that land on top of the lines/wires 2602. In this case, only the largest blanker array is enabled.[00126] Figure 27 illustrates a three beam staggered aperture array 2700 of a BAA relative to a plurality of medium sized lines 2702 having cuts (e.g., breaks 2704 in the horizontal lines) or vias (filled-in boxes 2706) patterned using the BAA, with scanning direction shown by the arrow 2708, in accordance with an embodiment of the present invention. Referring to Figure 27, all the lines in a local region are of the same size (in this case, corresponding to the medium sized apertures 2710 in the middle of the BAA). Thus, Figure 27 illustrates a typical pattern produced by one of three staggered beam aperture arrays. Dotted lines show where cuts occurred in patterned lines. Dark rectangles are patterning vias that land on top of the lines/wires 2702. In this case, only the medium blanker array is enabled.[00127] Figure 28 illustrates a three beam staggered aperture array 2800 of a BAA relative to a plurality of small lines 2802 having cuts (e.g., breaks 2804 in the horizontal lines) or vias (filled-in boxes 2806) patterned using the BAA, with scanning direction shown by the arrow 2808, in accordance with an embodiment of the present invention. Referring to Figure 28, all the lines in a local region are of the same size (in this case, corresponding to the smallest apertures 2810 on the left side of the BAA). Thus, Figure 28 illustrates a typical pattern produced by one of three staggered beam aperture arrays. Dotted lines show where cuts occurred in patterned lines. Dark rectangles are patterning vias that land on top of the lines/wires 2802. In this case, only the small blanker array is enabled.[00128] In another embodiment, combinations of the three pitches can be patterned, where aperture alignment against the lines already in these positions is possible. Figure 29A illustrates a three beam staggered aperture array 2900 of a BAA relative to a plurality of lines 2902 of varying size having cuts (e.g., breaks 2904 in the horizontal lines) or vias (filled-in boxes 2906) patterned using the BAA, with scanning direction shown by the arrow 2908, in accordance with an embodiment of the present invention. Referring to Figure 29A, as many as three different metal widths can be patterned on the fixed grids 2950 that occur on the three-staggered BAA. The dark colored apertures 2910 of the BAA are being turned on/off during the scan. The light colored BAA apertures 2912 remain off. Thus, Figure 29A illustrates a typical pattern produced by simultaneous use of all three staggered beam aperture arrays. Dotted lines show where cuts occurred in patterned lines. Dark rectangles are patterning vias that land on top of the lines/wires 2902.
In this case, the small blanker array, the medium blanker array and the large blanker array are all enabled.[00129] Figure 29B illustrates a cross-sectional view of a stack 2960 of metallization layers in an integrated circuit based on metal line layouts of the type illustrated in Figure 29A, in accordance with an embodiment of the present invention. Referring to Figure 29B, in an exemplary embodiment, a metal cross-section for an interconnect stack is derived from three BAA pitch arrays of 1x, 1.5x and 3x pitch/width for the lower eight matched levels 2962, 2964, 2966, 2968, 2970, 2972, 2974 and 2976. For example, in level 2962, exemplary lines 2980 of 1x, an exemplary line 2982 of 1.5x, and an exemplary line 2984 of 3x are called out. It is to be appreciated that the varying width for the metals can only be seen for those layers with lines coming out of the page. All metals in the same layer are the same thickness regardless of metal width. It is to be appreciated that upper thicker/wider metals would not be made with the same three pitch BAA.[00130] In another embodiment, different lines within the array can change width. Figure 30 illustrates a three beam staggered aperture array 3000 of a BAA relative to a plurality of lines 3002 of varying size having cuts (e.g., breaks 3004 in the horizontal lines) or vias (filled-in boxes 3006) patterned using the BAA, with scanning direction shown by the arrow 3008, in accordance with an embodiment of the present invention. Referring to Figure 30, the third horizontal line 3050 from the bottom of the array of lines 3002 has a wide line 3052 on a same grid line 3056 as a narrow line 3054. The corresponding different sized, but horizontally aligned, apertures 3060 and 3062 used to cut or make vias in the different sized lines are highlighted and centered on the two lines 3052 and 3054. Thus, Figure 30 illustrates a scenario with the additional possibility to change line widths during patterning, as well as within different regions.[00131] In a sixth aspect of embodiments of the present invention, a non-universal cutter for an ebeam tool is described.[00132] In an embodiment, the cutting of multiple pitches of wires in the same region is made possible. In a particular implementation, high throughput ebeam processing is used to define cuts with two BAA arrays, each with opening heights equal to predetermined values. As an illustrative example, opening heights of N (20nm, the minimal layout pitch) and M (30nm) can cut multiple pitch layouts (N [20nm], M [30nm], N*2 [40nm], N*3 or M*2 [60nm], N*4 [80nm], M*3 [90nm], etc.) with a required EPE tolerance of minimum pitch/4 (N/4), provided that cut/plug tracks are placed on grids.[00133] Figure 31 illustrates three sets of lines 3102, 3104 and 3106 of differing pitch with overlying corresponding apertures 3100 on each line, in accordance with an embodiment of the present invention. Referring to Figure 31, arrays with vertical pitches of 40nm, 30nm and 20nm are shown. For the 40nm pitch lines 3102, a staggered BAA (e.g., having 2048 openings) is available for cutting the lines. For the 30nm pitch lines 3104, a staggered BAA (e.g., having 2730 openings) is available for cutting the lines. For the 20nm pitch lines 3106, a staggered BAA (e.g., having 4096 openings) is available for cutting the lines. In this exemplary case, parallel lines drawn on a 10 nm step unidirectional grid 3150 with pitches 20nm, 30nm and 40nm need to be cut, as illustrated in the sketch below.
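The arithmetic of the N/M example above can be illustrated with a short sketch (the values are the example's, not a limitation): the cuttable layout pitches are exactly the integer multiples of the two opening heights N and M, all of which land on the common 10nm grid.

```python
# Illustrative arithmetic for the two-opening-height scheme described above:
# with opening heights N = 20 nm and M = 30 nm on a common 10 nm grid, the
# cuttable layout pitches are the multiples of N or M.

N_NM, M_NM, GRID_NM, MAX_NM = 20, 30, 10, 90

cuttable = sorted({k * p for p in (N_NM, M_NM)
                   for k in range(1, MAX_NM // p + 1)})
assert cuttable == [20, 30, 40, 60, 80, 90]     # N, M, 2N, 3N=2M, 4N, 3M
assert all(p % GRID_NM == 0 for p in cuttable)  # all land on the 10 nm grid
```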
The BAA has three pitches (i.e., three sub-arrays) and is axially aligned with drawn tracks 3160, as depicted in Figure 31.[00134] Provided each aperture on each of the three sub-arrays of Figure 31 has its own driver, cutting of complex layouts with tracks on a layout consistent with the depicted unidirectional grid can be performed with tool throughput independent of the number and mix of pitches present in the layout. The result is that multiple cuts, multiple simultaneous cuts of different widths, and cuts of widths that are greater than any single pitch are made possible. The design may be referred to as pitch-agnostic throughput. To provide context, such a result is not possible where multiple passes of the wafer are required for each pitch. It is to be appreciated that such an implementation is not restricted to three BAA opening sizes. Additional combinations could be produced as long as there is a common grid relationship between the various BAA pitches.[00135] Furthermore, in an embodiment, multiple cuts made at the same time are possible with multiple pitches, and wider lines are accommodated by combinations of different openings that completely cover the cut distance. For example, Figure 32 illustrates a plurality of different sized lines 3202 including one very large line 3204, and a beam aperture array vertical pitch layout 3206 (three arrays 3208, 3210 and 3212) on a common grid 3214, in accordance with an embodiment of the present invention. The very wide line 3204 is cut by a combination of three large apertures 3216 which are additive in the vertical direction. It is to be appreciated that, in viewing Figure 32, the wires 3202 are shown as being cut by various openings which are shown as dashed boxes (e.g., dashed boxes 3218 corresponding to apertures 3216).[00136] In a seventh aspect of embodiments of the present invention, a universal cutter for an ebeam tool is described.[00137] In an embodiment, high throughput ebeam processing is enabled by defining cuts such that a single (universal) BAA having opening heights equal to predetermined values can be used for a variety of line pitches/widths. In one such embodiment, the opening heights are targeted at half of the minimal layout pitch. It is to be appreciated that reference to "opening heights" refers to the spot size impinged on the receiving wafer and not to the physical opening in the BAA since the physical openings are substantially larger (e.g., micron scale) than the spot size (e.g., nanometer scale) ultimately generated from the BAA. In a particular example, the height of the openings is 10nm for a minimal layout pitch of N=20nm. In such a case, multiple pitch layouts (e.g., N [20nm], M [30nm], N*2 [40nm], N*3 or M*2 [60nm], N*4 [80nm], M*3 [90nm], etc.) can be cut. The cuts can be performed with a required EPE tolerance of minimum pitch/4 (N/4), provided cut/plug tracks are placed on a predetermined grid where track axes are aligned on a predetermined one-dimensional (1D) grid coincident with the middle between two BAA openings. Each metal track adjacency is interrupted by exposing two openings at minimum to satisfy an EPE requirement of pitch/4.[00138] In an example, Figure 33 illustrates a plurality of different sized lines 3302, and a universal cutter pitch array 3304, in accordance with an embodiment of the present invention. Referring to Figure 33, in a particular embodiment, a BAA having a 10nm pitch array 3304 with, e.g., 8192 openings (only a few of which are shown) is used as a universal cutter.
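A minimal sketch of the universal-cutter selection logic follows (illustrative only; the opening indexing and the function name are assumptions): because the spot height equals the 10nm array pitch, a cut of arbitrary width and position can be made by enabling every consecutive opening whose spot overlaps the cut span.

```python
# Minimal sketch of universal-cutter opening selection (illustrative). The
# universal array has square spots of height equal to its pitch, so covering
# an arbitrary cut span [bottom, top] means enabling each consecutive opening
# whose spot overlaps that span.

OPENING_PITCH_NM = 10  # spot height equals the pitch for the universal array

def openings_for_cut(line_bottom_nm: float, line_top_nm: float) -> range:
    """Indices of openings whose 10 nm spots together cover [bottom, top]."""
    first = int(line_bottom_nm // OPENING_PITCH_NM)
    last = int(-(-line_top_nm // OPENING_PITCH_NM))  # ceiling division
    return range(first, last)

# A 30 nm-wide line from y=25 to y=55 needs openings 2..5 (spots span 20-60 nm).
assert list(openings_for_cut(25, 55)) == [2, 3, 4, 5]
```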
It is to be appreciated that although shown on a common grid 3306, in one embodiment, the lines need not actually be aligned to a grid at all. In that embodiment, spacing is differentiated by the cutter openings.[00139] More generally, referring again to Figure 33, a beam aperture array 3304 includes an array of staggered square beam openings 3308 (e.g., 8192 staggered square beam openings) that can be implemented to cut any width line/wire 3302 by using one or more of the openings in conjunction in the vertical direction while the scan is performed along the horizontal direction 3310. The only restriction is that adjacent wires be separated by 2*EPE for cutting any individual wire. In one embodiment, the wires are cut by combinations of universal cutter openings 3308 chosen on the fly from the BAA 3304. As an example, line 3312 is cut by three openings 3314 from the BAA 3304. In another example, line 3316 is cut by 11 openings 3318 from the BAA 3304.[00140] For comparison to a non-universal cutter, a grouping of arrays 3320 is illustrated in Figure 33. It is to be appreciated that the grouping of arrays 3320 is not present in the universal cutter, but is shown for comparison of the universal cutter to a non-universal cutter based on the grouping of arrays 3320.[00141] To provide context, other beam aperture array arrangements require openings that are specifically aligned on the centerline of the lines to be cut. Instead, in accordance with an embodiment herein, a universal aperture array technique allows universal cutting of any width line/wire on non-aligned line centerlines. Furthermore, changes in line widths (and spacings) that would otherwise be fixed by the BAA of other techniques are accommodated by the universal cutter. Accordingly, late changes to a fabrication process, or lines/wires specifically tailored to the RC needs of an individual circuit, may be permitted.[00142] It is to be appreciated that as long as the EPE coverage requirement of pitch/4 is met, the various lines/wires do not have to be exactly aligned in a universal cutter scenario. The only restriction is that sufficient space is provided between lines to have an EPE/2 distance between lines, with the cutter lining up at EPE/4, as follows. Figure 34 demonstrates the 2*EPE rule for a universal cutter 3400 as referenced against two lines 3402 and 3404, in accordance with an embodiment of the present invention. Referring to Figure 34, the EPE 3406 of the top line and the EPE 3408 of the bottom line provide the 2*EPE width which corresponds to the pitch of the universal cutter holes 3410. Thus, the rule for opening pitch corresponds to the minimum space between two lines. If the distance is greater than this, the cutter will cut any arbitrary width line. Note that the minimum hole size and pitch are exactly equal to 2*EPE for lines. [00143] In an embodiment, by using a universal cutter, the resulting structures can have random wire widths and placement in an ebeam-produced semiconductor sample. The random placement, however, is still described as unidirectional since no orthogonal lines or hooks are fabricated in this approach. A universal cutter can be implemented for cutting many different pitches and widths, e.g., whatever can be fabricated by patterning prior to ebeam patterning used for cuts and vias.
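Before comparing cutter types, the 2*EPE rule of Figure 34 can be stated as a one-line check (a sketch under an assumed 5nm EPE budget; the name and value are illustrative, not part of the disclosure):

```python
# Hedged sketch of the 2*EPE spacing rule for the universal cutter: adjacent
# lines must be separated by at least EPE (top line) + EPE (bottom line), and
# that 2*EPE distance equals the minimum hole size and pitch of the array.

EPE_NM = 5                   # assumed edge placement error budget
HOLE_PITCH_NM = 2 * EPE_NM   # minimum hole size and pitch = 2*EPE

def cut_is_safe(gap_between_lines_nm: float) -> bool:
    """True if a cut on one line cannot clip its neighbor, per the 2*EPE rule."""
    return gap_between_lines_nm >= 2 * EPE_NM

assert cut_is_safe(10)       # gap of exactly 2*EPE: any width line can be cut
assert not cut_is_safe(6)    # too close: a cut would violate the EPE budget
```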
As a comparison, the above described staggered array and three-staggered array BAAs are associated with fixed locations for the pitches.[00144] More generally, referring to all of the above aspects of embodiments of the present invention, it is to be appreciated that a metallization layer having lines with line cuts (or plugs) and having associated vias may be fabricated above a substrate and, in one embodiment, may be fabricated above a previous metallization layer. As an example, Figure 35 illustrates a plan view and corresponding cross-sectional view of a previous layer metallization structure, in accordance with an embodiment of the present invention. Referring to Figure 35, a starting structure 3500 includes a pattern of metal lines 3502 and interlayer dielectric (ILD) lines 3504. The starting structure 3500 may be patterned in a grating-like pattern with metal lines spaced at a constant pitch and having a constant width, as is depicted in Figure 35. Although not shown, the lines 3502 may have interruptions (i.e., cuts or plugs) at various locations along the lines. The pattern, for example, may be fabricated by a pitch halving or pitch quartering approach, as described above. Some of the lines may be associated with underlying vias, such as line 3502' shown as an example in the cross-sectional view.[00145] In an embodiment, fabrication of a metallization layer on the previous metallization structure of Figure 35 begins with formation of an interlayer dielectric (ILD) material above the structure 3500. A hardmask material layer may then be formed on the ILD layer. The hardmask material layer may be patterned to form a grating of unidirectional lines orthogonal to the lines 3502 of structure 3500. In one embodiment, the grating of unidirectional hardmask lines is fabricated using conventional lithography (e.g., photoresist and other associated layers) and may have a line density defined by a pitch-halving, pitch-quartering, etc. approach as described above. The grating of hardmask lines leaves exposed a grating region of the underlying ILD layer. It is these exposed portions of the ILD layer that are ultimately patterned for metal line formation, via formation, and plug formation. For example, in an embodiment, via locations are patterned in regions of the exposed ILD using EBL as described above. The patterning may involve formation of a resist layer and patterning of the resist layer by EBL to provide via opening locations which may be etched into the ILD regions. The lines of overlying hardmask can be used to confine the vias to only regions of the exposed ILD, with overlap accommodated by the hardmask lines which can effectively be used as an etch stop. Plug (or cut) locations may also be patterned in exposed regions of the ILD, as confined by the overlying hardmask lines, in a separate EBL processing operation. The fabrication of cuts or plugs effectively preserves regions of ILD that will ultimately interrupt metal lines fabricated therein. Metal lines may then be fabricated using a damascene approach, where exposed portions of the ILD (those portions between the hardmask lines and not protected by a plug preservation layer, such as a resist layer patterned during "cutting") are partially recessed. The recessing may further extend the via locations to open metal lines from the underlying metallization structure.
The partially recessed ILD regions are then filled with metal (a process which may also involve filling the via locations), e.g., by plating and CMP processing, to provide metal lines between the overlying hardmask lines. The hardmask lines may ultimately be removed for completion of a metallization structure. It is to be appreciated that the above ordering of line cuts, via formation, and ultimate line formation is provided only as an example. A variety of processing schemes may be accommodated using EBL cuts and vias, as described herein.[00146] In an embodiment, as used throughout the present description, interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon doped oxides of silicon, various low-k dielectric materials known in the arts, and combinations thereof. The interlayer dielectric material may be formed by conventional techniques, such as, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods.[00147] In an embodiment, as is also used throughout the present description, interconnect material is composed of one or more metal or other conductive structures. A common example is the use of copper lines and structures that may or may not include barrier layers between the copper and surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, the metal interconnect lines may include barrier layers, stacks of different metals or alloys, etc. The interconnect lines are also sometimes referred to in the arts as traces, wires, lines, metal, or simply interconnect.[00148] In an embodiment, as is also used throughout the present description, hardmask materials are composed of dielectric materials different from the interlayer dielectric material. In some embodiments, a hardmask layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. In another embodiment, a hardmask material includes a metal species. For example, a hardmask or other overlying material may include a layer of a nitride of titanium or another metal (e.g., titanium nitride). Potentially lesser amounts of other materials, such as oxygen, may be included in one or more of these layers. Alternatively, other hardmask layers known in the arts may be used depending upon the particular implementation. The hardmask layers may be formed by CVD, PVD, or by other deposition methods.[00149] It is to be appreciated that the layers and materials described in association with Figure 35 are typically formed on or above an underlying semiconductor substrate or structure, such as underlying device layer(s) of an integrated circuit. In an embodiment, an underlying semiconductor substrate represents a general workpiece object used to manufacture integrated circuits. The semiconductor substrate often includes a wafer or other piece of silicon or another semiconductor material. Suitable semiconductor substrates include, but are not limited to, single crystal silicon, polycrystalline silicon and silicon on insulator (SOI), as well as similar substrates formed of other semiconductor materials.
The semiconductor substrate, depending on the stage of manufacture, often includes transistors, integrated circuitry, and the like. The substrate may also include semiconductor materials, metals, dielectrics, dopants, and other materials commonly found in semiconductor substrates. Furthermore, the structure depicted in Figure 35 may be fabricated on underlying lower level interconnect layers.[00150] In another embodiment, EBL cuts may be used to fabricate semiconductor devices, such as PMOS or NMOS devices of an integrated circuit. In one such embodiment, EBL cuts are used to pattern a grating of active regions that are ultimately used to form fin-based or trigate structures. In another such embodiment, EBL cuts are used to pattern a gate layer, such as a poly layer, ultimately used for gate electrode fabrication. As an example of a completed device, Figures 36A and 36B illustrate a cross-sectional view and a plan view (taken along the a-a' axis of the cross-sectional view), respectively, of a non-planar semiconductor device having a plurality of fins, in accordance with an embodiment of the present invention.[00151] Referring to Figure 36A, a semiconductor structure or device 3600 includes a non-planar active region (e.g., a fin structure including protruding fin portion 3604 and sub-fin region 3605) formed from substrate 3602, and within isolation region 3606. A gate line 3608 is disposed over the protruding portions 3604 of the non-planar active region as well as over a portion of the isolation region 3606. As shown, gate line 3608 includes a gate electrode 3650 and a gate dielectric layer 3652. In one embodiment, gate line 3608 may also include a dielectric cap layer 3654. A gate contact 3614 and overlying gate contact via 3616 are also seen from this perspective, along with an overlying metal interconnect 3660, all of which are disposed in inter-layer dielectric stacks or layers 3670. Also seen from the perspective of Figure 36A, the gate contact 3614 is, in one embodiment, disposed over isolation region 3606, but not over the non-planar active regions.[00152] Referring to Figure 36B, the gate line 3608 is shown as disposed over the protruding fin portions 3604. Source and drain regions 3604A and 3604B of the protruding fin portions 3604 can be seen from this perspective. In one embodiment, the source and drain regions 3604A and 3604B are doped portions of original material of the protruding fin portions 3604. In another embodiment, the material of the protruding fin portions 3604 is removed and replaced with another semiconductor material, e.g., by epitaxial deposition. In either case, the source and drain regions 3604A and 3604B may extend below the height of dielectric layer 3606, i.e., into the sub-fin region 3605.[00153] In an embodiment, the semiconductor structure or device 3600 is a non-planar device such as, but not limited to, a fin-FET or a tri-gate device. In such an embodiment, a corresponding semiconducting channel region is composed of or is formed in a three-dimensional body. In one such embodiment, the gate electrode stacks of gate lines 3608 surround at least a top surface and a pair of sidewalls of the three-dimensional body.[00154] Embodiments disclosed herein may be used to manufacture a wide variety of different types of integrated circuits and/or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, micro-controllers, and the like.
In other embodiments, semiconductor memory may be manufactured. Moreover, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the arts. For example, they may be used in computer systems (e.g., desktop, laptop, server), cellular phones, personal electronics, etc. The integrated circuits may be coupled with a bus and other components in the systems. For example, a processor may be coupled by one or more buses to a memory, a chipset, etc. Each of the processor, the memory, and the chipset may potentially be manufactured using the approaches disclosed herein.[00155] Figure 37 illustrates a computing device 3700 in accordance with one implementation of the invention. The computing device 3700 houses a board 3702. The board 3702 may include a number of components, including but not limited to a processor 3704 and at least one communication chip 3706. The processor 3704 is physically and electrically coupled to the board 3702. In some implementations the at least one communication chip 3706 is also physically and electrically coupled to the board 3702. In further implementations, the communication chip 3706 is part of the processor 3704.[00156] Depending on its applications, computing device 3700 may include other components that may or may not be physically and electrically coupled to the board 3702. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).[00157] The communication chip 3706 enables wireless communications for the transfer of data to and from the computing device 3700. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 3706 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 3700 may include a plurality of communication chips 3706. For instance, a first communication chip 3706 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 3706 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.[00158] The processor 3704 of the computing device 3700 includes an integrated circuit die packaged within the processor 3704. In some implementations of the invention, the integrated circuit die of the processor includes one or more structures fabricated using CEBL, in accordance with implementations of embodiments of the invention.
The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.[00159] The communication chip 3706 also includes an integrated circuit die packaged within the communication chip 3706. In accordance with another implementation of embodiments of the invention, the integrated circuit die of the communication chip includes one or more structures fabricated using CEBL, in accordance with implementations of embodiments of the invention.[00160] In further implementations, another component housed within the computing device 3700 may contain an integrated circuit die that includes one or more structures fabricated using CEBL, in accordance with implementations of embodiments of the invention.[00161] In various implementations, the computing device 3700 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 3700 may be any other electronic device that processes data.[00162] Embodiments of the present invention may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to embodiments of the present invention. In one embodiment, the computer system is coupled with an ebeam tool such as described in association with Figure 4 and/or Figures 24A-24C. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readabletransmission medium (electrical, optical, acoustical or other form of propagated signals (e.g., infrared signals, digital signals, etc.)), etc. [00163] Figure 38 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 3800 within which a set of instructions, for causing the machine to perform any one or more of the methodologies described herein (such as end-point detection), may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a Local Area Network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client- server networkenvironment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set- top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. 
Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines (e.g., computers) that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies described herein.[00164] The exemplary computer system 3800 includes a processor 3802, a main memory 3804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 3806 (e.g., flash memory, static random access memory (SRAM), etc.), and a secondary memory 3818 (e.g., a data storage device), which communicate with each other via a bus 3830.[00165] Processor 3802 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processor 3802 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processor 3802 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. Processor 3802 is configured to execute the processing logic 3826 for performing the operations described herein.[00166] The computer system 3800 may further include a network interface device 3808. The computer system 3800 also may include a video display unit 3810 (e.g., a liquid crystal display (LCD), a light emitting diode display (LED), or a cathode ray tube (CRT)), an alphanumeric input device 3812 (e.g., a keyboard), a cursor control device 3814 (e.g., a mouse), and a signal generation device 3816 (e.g., a speaker).[00167] The secondary memory 3818 may include a machine-accessible storage medium (or more specifically a computer-readable storage medium) 3832 on which is stored one or more sets of instructions (e.g., software 3822) embodying any one or more of the methodologies or functions described herein. The software 3822 may also reside, completely or at least partially, within the main memory 3804 and/or within the processor 3802 during execution thereof by the computer system 3800, the main memory 3804 and the processor 3802 also constituting machine-readable storage media. The software 3822 may further be transmitted or received over a network 3820 via the network interface device 3808.[00168] While the machine-accessible storage medium 3832 is shown in an exemplary embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present invention.
The term "machine- readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.[00169] Implementations of embodiments of the invention may be formed or carried out on a substrate, such as a semiconductor substrate. In oneimplementation, the semiconductor substrate may be a crystalline substrate formed using a bulk silicon or a silicon-on-insulator substructure. In other implementations, the semiconductor substrate may be formed using alternate materials, which may or may not be combined with silicon, that include but are not limited to germanium, indium antimonide, lead telluride, indium arsenide, indium phosphide, gallium arsenide, indium gallium arsenide, gallium antimonide, or other combinations of group ni-V or group IV materials. Although a few examples of materials from which the substrate may be formed are described here, any material that may serve as a foundation upon which a semiconductor device may be built falls within the spirit and scope of the present invention.[00170] A plurality of transistors, such as metal-oxide-semiconductor field- effect transistors (MOSFET or simply MOS transistors), may be fabricated on the substrate. In various implementations of the invention, the MOS transistors may be planar transistors, nonplanar transistors, or a combination of both. Nonplanar transistors include FinFET transistors such as double-gate transistors and tri-gate transistors, and wrap-around or all-around gate transistors such as nanoribbon and nanowire transistors. Although the implementations described herein may illustrate only planar transistors, it should be noted that the invention may also be carried out using nonplanar transistors.[00171] Each MOS transistor includes a gate stack formed of at least two layers, a gate dielectric layer and a gate electrode layer. The gate dielectric layer may include one layer or a stack of layers. The one or more layers may include silicon oxide, silicon dioxide (S1O2) and/or a high-k dielectric material. The high-k dielectric material may include elements such as hafnium, silicon, oxygen, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, and zinc. Examples of high-k materials that may be used in the gate dielectric layer include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, an annealing process may be carried out on the gate dielectric layer to improve its quality when a high-k material is used.[00172] The gate electrode layer is formed on the gate dielectric layer and may consist of at least one P-type workfunction metal or N-type workfunction metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some implementations, the gate electrode layer may consist of a stack of two or more metal layers, where one or more metal layers are workfunction metal layers and at least one metal layer is a fill metal layer.[00173] For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. 
A P-type metal layer will enable the formation of a PMOS gate electrode with a workfunction that is between about 4.9 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a workfunction that is between about 3.9 eV and about 4.2 eV.[00174] In some implementations, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another implementation, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further implementations of the invention, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.[00175] In some implementations of the invention, a pair of sidewall spacers may be formed on opposing sides of the gate stack that bracket the gate stack. The sidewall spacers may be formed from a material such as silicon nitride, silicon oxide, silicon carbide, silicon nitride doped with carbon, and silicon oxynitride. Processes for forming sidewall spacers are well known in the art and generally include deposition and etching process steps. In an alternate implementation, a plurality of spacer pairs may be used; for instance, two pairs, three pairs, or four pairs of sidewall spacers may be formed on opposing sides of the gate stack.[00176] As is well known in the art, source and drain regions are formed within the substrate adjacent to the gate stack of each MOS transistor. The source and drain regions are generally formed using either an implantation/diffusion process or an etching/deposition process. In the former process, dopants such as boron, aluminum, antimony, phosphorous, or arsenic may be ion-implanted into the substrate to form the source and drain regions. An annealing process that activates the dopants and causes them to diffuse further into the substrate typically follows the ion implantation process. In the latter process, the substrate may first be etched to form recesses at the locations of the source and drain regions. An epitaxial deposition process may then be carried out to fill the recesses with material that is used to fabricate the source and drain regions. In some implementations, the source and drain regions may be fabricated using a silicon alloy such as silicon germanium or silicon carbide. In some implementations the epitaxially deposited silicon alloy may be doped in situ with dopants such as boron, arsenic, or phosphorous. In further embodiments, the source and drain regions may be formed using one or more alternate semiconductor materials such as germanium or a group III-V material or alloy.
And in further embodiments, one or more layers of metal and/or metal alloys may be used to form the source and drain regions.[00177] One or more interlayer dielectrics (ILD) are deposited over the MOS transistors. The ILD layers may be formed using dielectric materials known for their applicability in integrated circuit structures, such as low-k dielectric materials. Examples of dielectric materials that may be used include, but are not limited to, silicon dioxide (SiO2), carbon doped oxide (CDO), silicon nitride, organic polymers such as perfluorocyclobutane or polytetrafluoroethylene, fluorosilicate glass (FSG), and organosilicates such as silsesquioxane, siloxane, or organosilicate glass. The ILD layers may include pores or air gaps to further reduce their dielectric constant.[00178] Figure 39 illustrates an interposer 3900 that includes one or more embodiments of the invention. The interposer 3900 is an intervening substrate used to bridge a first substrate 3902 to a second substrate 3904. The first substrate 3902 may be, for instance, an integrated circuit die. The second substrate 3904 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an interposer 3900 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an interposer 3900 may couple an integrated circuit die to a ball grid array (BGA) 3906 that can subsequently be coupled to the second substrate 3904. In some embodiments, the first and second substrates 3902/3904 are attached to opposing sides of the interposer 3900. In other embodiments, the first and second substrates 3902/3904 are attached to the same side of the interposer 3900. And in further embodiments, three or more substrates are interconnected by way of the interposer 3900.[00179] The interposer 3900 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the interposer may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.[00180] The interposer may include metal interconnects 3908 and vias 3910, including but not limited to through-silicon vias (TSVs) 3912. The interposer 3900 may further include embedded devices 3914, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 3900.[00181] In accordance with embodiments of the invention, apparatuses or processes disclosed herein may be used in the fabrication of interposer 3900.[00182] Figure 40 illustrates a computing device 4000 in accordance with one embodiment of the invention. The computing device 4000 may include a number of components. In one embodiment, these components are attached to one or more motherboards. In an alternate embodiment, these components are fabricated onto a single system-on-a-chip (SoC) die rather than a motherboard.
The components in the computing device 4000 include, but are not limited to, an integrated circuit die 4002 and at least one communication chip 4008. In some implementations the communication chip 4008 is fabricated as part of the integrated circuit die 4002. The integrated circuit die 4002 may include a CPU 4004 as well as on-die memory 4006, often used as cache memory, that can be provided by technologies such as embedded DRAM (eDRAM) or spin-transfer torque memory (STTM or STTM-RAM).[00183] Computing device 4000 may include other components that may or may not be physically and electrically coupled to the motherboard or fabricated within an SoC die. These other components include, but are not limited to, volatile memory 4010 (e.g., DRAM), non-volatile memory 4012 (e.g., ROM or flash memory), a graphics processing unit 4014 (GPU), a digital signal processor 4016, a crypto processor 4042 (a specialized processor that executes cryptographic algorithms within hardware), a chipset 4020, an antenna 4022, a display or a touchscreen display 4024, a touchscreen controller 4026, a battery 4029 or other power source, a power amplifier (not shown), a global positioning system (GPS) device 4028, a compass 4030, a motion coprocessor or sensors 4032 (that may include an accelerometer, a gyroscope, and a compass), a speaker 4034, a camera 4036, user input devices 4038 (such as a keyboard, mouse, stylus, and touchpad), and a mass storage device 4040 (such as hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).[00184] The communication chip 4008 enables wireless communications for the transfer of data to and from the computing device 4000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 4008 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 4000 may include a plurality of communication chips 4008. For instance, a first communication chip 4008 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth and a second communication chip 4008 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.[00185] The processor 4004 of the computing device 4000 includes one or more structures fabricated using CEBL, in accordance with implementations of embodiments of the invention.
The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.[00186] The communication chip 4008 may also include one or more structures fabricated using CEBL, in accordance with implementations of embodiments of the invention.[00187] In further embodiments, another component housed within the computing device 4000 may contain one or more structures fabricated using CEBL, in accordance with implementations of embodiments of the invention.[00188] In various embodiments, the computing device 4000 may be a laptop computer, a netbook computer, a notebook computer, an ultrabook computer, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 4000 may be any other electronic device that processes data.[00189] The above description of illustrated implementations of embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific implementations of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.[00190] These modifications may be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific implementations disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.[00191] In an embodiment, a layout for a metallization layer of an integrated circuit includes a first region having a plurality of unidirectional lines of a first width and a first pitch and parallel with a first direction. The layout also includes a second region having a plurality of unidirectional lines of a second width and a second pitch and parallel with the first direction, the second width and the second pitch different than the first width and the first pitch, respectively. 
The layout also includes a third region having a plurality of unidirectional lines of a third width and a third pitch and parallel with the first direction, the third width and the third pitch different than the first and second widths and different than the first and second pitches.

[00192] In one embodiment, in a second direction orthogonal to the first direction, the plurality of unidirectional lines of the second region do not overlap with the plurality of unidirectional lines of the first region, and the plurality of unidirectional lines of the third region do not overlap with the plurality of unidirectional lines of the first region or with the plurality of unidirectional lines of the second region.

[00193] In one embodiment, in a second direction orthogonal to the first direction, a portion of the plurality of unidirectional lines of the second region overlap with the plurality of unidirectional lines of the first region.

[00194] In one embodiment, the plurality of unidirectional lines of the second region is interdigitated with the plurality of unidirectional lines of the first region.

[00195] In one embodiment, the second width is 1.5 times the first width and the second pitch is 1.5 times the first pitch, and the third width is 3 times the first width and the third pitch is 3 times the first pitch.

[00196] In one embodiment, the first region is a logic region, the second region is an analog/IO region, and the third region is an SRAM region.

[00197] In one embodiment, none of the first, second or third regions of the layout includes lines having jogs, orthogonal direction lines, or hooks.

[00198] In an embodiment, a metallization layer of an integrated circuit includes a first region having a plurality of unidirectional wires of a first width and a first pitch and parallel with a first direction. The metallization layer also includes a second region having a plurality of unidirectional wires of a second width and a second pitch and parallel with the first direction, the second width and the second pitch different than the first width and the first pitch, respectively.
The metallization layer also includes a third region having a plurality of unidirectional wires of a third width and a third pitch and parallel with the first direction, the third width and the third pitch different than the first and second widths and different than the first and second pitches.

[00199] In one embodiment, in a second direction orthogonal to the first direction, the plurality of unidirectional wires of the second region do not overlap with the plurality of unidirectional wires of the first region, and the plurality of unidirectional wires of the third region do not overlap with the plurality of unidirectional wires of the first region or with the plurality of unidirectional wires of the second region.

[00200] In one embodiment, in a second direction orthogonal to the first direction, a portion of the plurality of unidirectional wires of the second region overlap with the plurality of unidirectional wires of the first region.

[00201] In one embodiment, the plurality of unidirectional wires of the second region is interdigitated with the plurality of unidirectional wires of the first region.

[00202] In one embodiment, the second width is 1.5 times the first width and the second pitch is 1.5 times the first pitch, and the third width is 3 times the first width and the third pitch is 3 times the first pitch.

[00203] In one embodiment, the first region is a logic region, the second region is an analog/IO region, and the third region is an SRAM region.

[00204] In one embodiment, none of the first, second or third regions of the layout includes wires having jogs, orthogonal direction wires, or hooks.

[00205] In an embodiment, a method of forming a pattern for a semiconductor structure involves forming a pattern of lines above a substrate. The pattern of lines includes a first region having a plurality of unidirectional lines of a first width and a first pitch and parallel with a first direction. The pattern of lines also includes a second region having a plurality of unidirectional lines of a second width and a second pitch and parallel with the first direction, the second width and the second pitch different than the first width and the first pitch, respectively. The pattern of lines also includes a third region having a plurality of unidirectional lines of a third width and a third pitch and parallel with the first direction, the third width and the third pitch different than the first and second widths and different than the first and second pitches. The method also involves aligning the substrate in an e-beam tool to provide the pattern of lines parallel with a scan direction of the e-beam tool, the scan direction orthogonal to the first direction. The method also involves forming a pattern of cuts in or above the pattern of lines to provide line breaks for the pattern of lines by scanning the substrate along the scan direction.

[00206] In one embodiment, forming the pattern of cuts involves using a three beam staggered blanker aperture array.

[00207] In one embodiment, forming the pattern of cuts involves using a universal cutter blanker aperture array.

[00208] In one embodiment, forming the pattern of cuts involves using a non-universal cutter blanker aperture array.

[00209] In one embodiment, forming the pattern of lines involves using a pitch halving or pitch quartering technique.

[00210] In one embodiment, forming the pattern of cuts involves exposing regions of a layer of photo-resist material. |
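To make the width/pitch relationships recited in the layout and metallization embodiments above concrete, the following is a minimal sketch in Python of the three-region parameterization, assuming the 1.5x and 3x factors and the logic/analog-IO/SRAM assignment from those embodiments; all other names and the example numbers are illustrative, not part of the disclosure:

    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str     # e.g., "logic", "analog/IO", "SRAM"
        width: float  # line width (arbitrary units, e.g., nm)
        pitch: float  # line pitch (same units)

    def three_region_layout(first_width: float, first_pitch: float) -> list[Region]:
        """Derive the second and third regions from the first: the second
        at 1.5 times the first width/pitch and the third at 3 times, with
        all lines unidirectional and parallel with a single direction."""
        return [
            Region("logic", first_width, first_pitch),
            Region("analog/IO", 1.5 * first_width, 1.5 * first_pitch),
            Region("SRAM", 3.0 * first_width, 3.0 * first_pitch),
        ]

    for region in three_region_layout(first_width=10.0, first_pitch=40.0):
        print(f"{region.name}: width={region.width}, pitch={region.pitch}")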
An apparatus for data processing, according to one or more aspects of the disclosure, includes a processing system configured to communicate with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts, and provide body tracking based on the body positioning data. The body positioning data relates to ranging and/or angular position between each of the reference nodes and a reference plane defined by one or more of the reference nodes. |
CLAIMS

WHAT IS CLAIMED IS:

1. An apparatus for data processing comprising: a processing system configured to communicate with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts, and provide body tracking based on the body positioning data, wherein the body positioning data relates to ranging between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

2. The apparatus of claim 1, wherein the apparatus comprises a game console, and wherein the processing system is further configured to support one or more gaming applications by providing body tracking based on the body positioning data.

3. The apparatus of claim 1, wherein the apparatus further comprises means for supporting the apparatus on the body.

4. The apparatus of claim 1, further comprising at least one sensor configured to generate reference data relating to relative position of at least one body part in relation to the apparatus.

5. The apparatus of claim 1, wherein the body positioning data further relates to angular position between each of the reference nodes and the reference plane.

6. The apparatus of claim 1, wherein the processing system is configured to communicate with at least one of the reference nodes when worn on body parts of multiple users to obtain the body positioning data.

7. The apparatus of claim 1, wherein the body positioning data comprises one or more physical dimensions of the body.

8. The apparatus of claim 1, wherein the body positioning data comprises one or more movements of the body.

9. The apparatus of claim 1, wherein the body positioning data comprises one or more physical dimensions of the body, one or more movements of the body, and a relationship between the one or more physical dimensions of the body and the one or more movements of the body.

10. The apparatus of claim 1, wherein the body positioning data comprises one or more physical dimensions of the body, and wherein the processing system is configured to create a historical record of the one or more physical dimensions of the body from the body positioning data.

11. The apparatus of claim 1, wherein the body positioning data comprises data related to one or more movements of the body, and wherein the processing system is configured to create a historical record of the body positioning data related to the one or more movements of the body.

12. The apparatus of claim 1, wherein the processing system is configured to generate at least a portion of the body positioning data.

13. The apparatus of claim 1, wherein the processing system is configured to receive at least a portion of the body positioning data from the at least one of the reference nodes.

14. A method for data processing comprising: communicating with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts; and providing body tracking based on the body positioning data, wherein the body positioning data relates to ranging between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

15. The method of claim 14, wherein the body positioning data further relates to angular position between each of the reference nodes and the reference plane.

16.
The method of claim 14, wherein communicating with the at least one of the reference nodes includes communicating with the at least one of the reference nodes when worn on body parts of multiple users to obtain the body positioning data.

17. The method of claim 14, wherein the body positioning data comprises one or more physical dimensions of the body.

18. The method of claim 14, wherein the body positioning data comprises one or more movements of the body.

19. The method of claim 14, wherein the body positioning data comprises one or more physical dimensions of the body, one or more movements of the body, and a relationship between the one or more physical dimensions of the body and the one or more movements of the body.

20. The method of claim 14, wherein the body positioning data comprises one or more physical dimensions of the body, and wherein providing body tracking comprises creating a historical record of the one or more physical dimensions of the body from the body positioning data.

21. The method of claim 14, wherein the body positioning data comprises data related to one or more movements of the body, and wherein providing body tracking comprises creating a historical record of the body positioning data related to the one or more movements of the body from the body positioning data.

22. The method of claim 14, further comprising generating at least a portion of the body positioning data.

23. The method of claim 14, further comprising receiving at least a portion of the body positioning data from the at least one of the reference nodes.

24. An apparatus for data processing comprising: means for communicating with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts; and means for providing body tracking based on the body positioning data, wherein the body positioning data relates to ranging between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

25. The apparatus of claim 24, wherein the apparatus comprises a game console, and wherein the apparatus further comprises means for supporting one or more gaming applications.

26. The apparatus of claim 24, wherein the apparatus further comprises means for supporting the apparatus on the body.

27. The apparatus of claim 24, further comprising a sensing means for generating reference data relating to relative position of at least one body part.

28. The apparatus of claim 24, wherein the body positioning data further relates to angular position between each of the reference nodes and the reference plane.

29. The apparatus of claim 24, wherein the means for communicating with the at least one of the reference nodes comprises means for communicating with the at least one of the reference nodes when worn on body parts of multiple users to obtain the body positioning data.

30. The apparatus of claim 24, wherein the body positioning data comprises one or more physical dimensions of the body.

31. The apparatus of claim 24, wherein the body positioning data comprises one or more movements of the body.

32. The apparatus of claim 24, wherein the body positioning data comprises one or more physical dimensions of the body, one or more movements of the body, and a relationship between the one or more physical dimensions of the body and the one or more movements of the body.

33.
The apparatus of claim 24, wherein the body positioning data comprises one or more physical dimensions of the body, and wherein the means for providing body tracking is configured to create a historical record of the one or more physical dimensions of the body from the body positioning data.

34. The apparatus of claim 24, wherein the body positioning data comprises data related to one or more movements of the body, and wherein the means for providing body tracking is configured to create a historical record of the body positioning data related to the one or more movements of the body from the body positioning data.

35. The apparatus of claim 24, further comprising means for generating at least a portion of the body positioning data.

36. The apparatus of claim 24, further comprising means for receiving at least a portion of the body positioning data from the at least one of the reference nodes.

37. A computer program product comprising: a computer-readable medium comprising codes executable to cause an apparatus to: communicate with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts; and provide body tracking based on the body positioning data, wherein the body positioning data relates to ranging between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

38. A game console comprising: a receiver configured to receive information from a user; and a processing system configured to communicate with at least one of a plurality of reference nodes worn on body parts of the user to obtain body positioning data relating to relative position between the body parts of the user, and provide body tracking of the user based on the body positioning data, wherein the body positioning data relates to ranging between each of the reference nodes and a reference plane defined by one or more of the reference nodes. |
METHOD AND APPARATUS FOR TRACKING ORIENTATION OF A USER

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application claims priority to and the benefit of U.S. Provisional Application Serial No. 61/430,007, entitled "Method and Apparatus for Tracking Orientation of a User" and filed on January 5, 2011, the contents of which are hereby incorporated by reference herein in their entirety.

BACKGROUND

I. Field

[0002] The following description relates generally to computer science, and more particularly to a method and an apparatus for tracking orientation of a user.

II. Background

[0003] Some conventional body tracking techniques are deficient with respect to accuracy, interference, and set-up. These conventional body tracking techniques require controlled and fixed environments, multiple sensors on-and-off the body, and cameras. Strictly controlling the environment and needing a fixed location with a system of motion capture sensors or cameras surrounding the body being tracked significantly restricts a trackable area. These conventional body tracking techniques generally need a large number of cameras, motion capture sensors, or magnets surrounding the body to generate a 3-Dimensional environment for user orientation in space. These conventional body tracking techniques can be costly to implement and are difficult to implement properly, which leaves non-professional companies or individuals without an option to utilize body tracking technology. Therefore, there exists a need to improve body tracking techniques.

SUMMARY

[0004] The following presents a simplified summary of one or more aspects of methods and apparatuses to provide a basic understanding of such methods and apparatuses. This summary is not an extensive overview of all contemplated aspects of such methods and apparatuses, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all methods or apparatuses. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented herein.

[0005] According to one aspect of the disclosure, an apparatus for tracking orientation of a user includes a processing system configured to communicate with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts, and provide body tracking based on the body positioning data. The body positioning data relates to ranging and/or angular position between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

[0006] According to one aspect of the disclosure, a method for data processing includes communicating with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts and providing body tracking based on the body positioning data. The body positioning data relates to ranging and/or angular position between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

[0007] According to one aspect of the disclosure, an apparatus for data processing includes means for communicating with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts and means for providing body tracking based on the body positioning data.
The body positioning data relates to ranging and/or angular position between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

[0008] According to one aspect of the disclosure, a computer program product includes a computer-readable medium comprising codes executable to cause an apparatus to communicate with at least one of a plurality of reference nodes worn on body parts to obtain body positioning data relating to relative position between the body parts and provide body tracking based on the body positioning data. The body positioning data relates to ranging and/or angular position between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

[0009] According to one aspect of the disclosure, a game console for data processing includes a receiver configured to receive information from a user and a processing system configured to communicate with at least one of a plurality of reference nodes worn on body parts of the user to obtain body positioning data relating to relative position between the body parts of the user, and provide body tracking of the user based on the body positioning data. The body positioning data relates to ranging and/or angular position between each of the reference nodes and a reference plane defined by one or more of the reference nodes.

[0010] To the accomplishment of the foregoing and related ends, the one or more aspects of the various methods and apparatuses presented throughout this disclosure comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects of the various methods and apparatuses. These aspects are indicative, however, of but a few of the various ways in which the principles of such methods and apparatuses may be employed, and the described aspects are intended to include all variations of such methods and apparatuses and their equivalents.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1A shows an example of an apparatus and a separate remote system, in accordance with aspects of the disclosure.

[0012] FIG. 1B shows an example of the apparatus and the separate remote system including one or more reference nodes, in accordance with aspects of the disclosure.

[0013] FIGS. 1C-1D show examples of the apparatus with a processing system and the separate remote system with reference nodes, in accordance with aspects of the disclosure.

[0014] FIG. 2A shows an example of a process for scaling gesture recognition, in accordance with aspects of the disclosure.

[0015] FIG. 2B shows an example of a process for tracking orientation, in accordance with aspects of the disclosure.

[0016] FIG. 2C shows an example of a flow diagram for processing data and/or information related to tracking user orientation, in accordance with aspects of the disclosure.

[0017] FIGS. 3A-3D show examples of node maps, in accordance with aspects of the disclosure.

[0018] FIGS. 3E-3G show examples of node maps including a reference plane, in accordance with aspects of the disclosure.

[0019] FIG. 3H shows various examples of equations for tracking orientation, in accordance with aspects of the disclosure.

[0020] FIGS. 4-5 show examples of apparatuses suitable for implementing aspects of the disclosure.

DETAILED DESCRIPTION

[0021] Various aspects of methods and apparatuses will be described more fully hereinafter with reference to the accompanying drawings.
These methods and apparatuses may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented in this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of these methods and apparatuses to those skilled in the art. Based on the teachings herein, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the methods and apparatus disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the aspects presented throughout this disclosure herein. It should be understood that any aspect of the disclosure herein may be embodied by one or more elements of a claim.

[0022] Several aspects of this disclosure will now be presented with reference to FIG. 1A. FIG. 1A is a conceptual diagram illustrating an example of an apparatus 102 and a remote system 104 that may be separate from the apparatus 102. The apparatus 102 may comprise any node capable of tracking orientation of a user, including, by way of example, a computer such as a game console. In this aspect, user gestures and/or movements may be utilized to control interactions with applications (e.g., graphical user interface applications including video games) provided by the apparatus 102 to make the user's experience more interactive, and user gestures and/or movements may be scaled according to at least one physical dimension of the user to improve the user's experience. Alternatively, the apparatus 102 may utilize any other node that can remotely track user gestures, movements, and/or orientation, such as a computer with gesture, movement, and/or orientation recognition capability to reduce or eliminate the need for a traditional keyboard and mouse setup, a robotic system capable of gesture, movement, and/or orientation recognition, a personal computing device (e.g., laptop, personal computer (PC), personal digital assistant (PDA)), a personal communication device (e.g., mobile phone), an entertainment device (e.g., game console, digital media player, television), sign language recognition systems, facial gesture, movement, and/or orientation recognition systems, or any other suitable node responsive to input methods other than traditional touch, pointing device, and speech.

[0023] User gestures may originate from any user body motion, movement, pose, and/or change in orientation. User gestures may include full body motion, movement, pose, and/or change in orientation, and user gestures may include any body part motion, movement, pose, and/or change in orientation. For example, user gestures may include hand movements (e.g., punch, chop, lift, etc.), foot movements (e.g., kick, knee bend, etc.), head movements (e.g., head shake, nod, etc.), and/or body movements (e.g., jumping, kneeling, lying down, etc.).

[0024] The remote system 104 may be any suitable system capable of communicating with the apparatus 102 to support tracking orientation of a user including user gesture, motion, and/or movement recognition functionality.
In at least one aspect, the remote system 104 may be configured to provide at least one input for gesture scaling by the apparatus 102 to improve the gesture accuracy of the user during the operation of the apparatus 102. By way of example, the gesture input required by a user to trigger an action or enter a command may be scaled based on at least one physical dimension and/or at least one movement of the user with the remote system 104.

[0025] Referring to FIG. 1A, the apparatus 102 is shown with a wireless connection to the remote system 104. However, in other aspects, the apparatus 102 may have a wired connection to the remote system 104. In the case of a wireless connection, any suitable radio technology or wireless protocol may be used. By way of example, the apparatus 102 and the remote system 104 may be configured to support wireless communications using Ultra-Wideband (UWB) technology including Qualcomm Personal Area Network Low power technology (PeANUT), 802.11n, etc. UWB technology utilizes high speed short range communications and may be defined as any radio technology having a spectrum that occupies a bandwidth greater than 20 percent of the center frequency, or a bandwidth of at least 500 MHz. Alternatively, the apparatus 102 and remote system 104 may be configured to support Bluetooth, Two-Way Infrared Protocol (TWIRP), or some other suitable wireless protocol.

[0026] In another case of a wireless connection, another suitable radio technology or wireless protocol that may be used may include a peer-to-peer network. In one example, peer-to-peer networks may utilize mesh-based access technologies including UWB, PeANUT, 802.11n, etc. A mesh-based network may utilize orthogonal frequency division multiplexing (OFDM) for the physical layer. The peer-to-peer network may be a short range, low power, and high bandwidth network.

[0027] In one implementation of the remote system 104, one or more sensors may be utilized and configured to provide one or more signals to the apparatus 102. Generally, a sensor is a device configured to measure or capture a physical quantity (e.g., motion, movement, acceleration, orientation, distance, range, height, length, etc.) and convert the physical quantity into a signal that can be transmitted to and processed by the apparatus 102. The one or more sensors may comprise one or more remote accelerometers, remote ranging sensors, remote gyros, or any other suitable sensor, or any combination thereof.

[0028] In another implementation of the remote system 104, a belt or harness may be utilized. The belt or harness may be wearable by a user. The belt or harness may include one or more sensors for tracking gestures, motion, movement, and/or changes in orientation of the user with or without regard to the location of the user relative to the apparatus 102. The belt or harness may include one or more sensors for measuring or capturing a physical dimension of the user to determine gestures, motion, movement, and/or changes in orientation of the user with or without regard to the location of the user relative to the apparatus 102. The one or more sensors may comprise one or more remote accelerometers, remote ranging sensors, remote gyros, or any other suitable sensor, or any combination thereof. The apparatus 102 may comprise means for supporting the apparatus 102 on a body so that the apparatus 102 is wearable by a user and may be worn on the body of a user.
The means for supporting the apparatus 102 on the body of a user may include some type of fastener, clip, snap, button, adhesive, etc., and/or the apparatus 102 may be supported by and/or attached to clothing, a belt, or a harness. Accordingly, in one example, the apparatus 102 may be configured to communicate with the sensors of the belt or harness (remote system 104) to measure, capture, and/or track physical dimensions, gestures, motion, movement, and/or changes in orientation of the user.

[0029] In another implementation of the remote system 104, a mat or platform may be utilized. The mat or platform may be positioned on the ground to establish ground level relative to a user. The mat or platform may include one or more sensors for tracking gestures, motion, movement, and/or changes in orientation of the user with or without regard to the location of the user relative to the apparatus 102. The mat or platform may include one or more sensors for measuring or capturing a physical dimension of the user to determine gestures, motion, movement, and/or changes in orientation of the user with or without regard to the location of the user relative to the apparatus 102. The one or more sensors may comprise one or more remote accelerometers, remote ranging sensors, remote gyros, or any other suitable sensor, or any combination thereof. As previously described, the apparatus 102 may comprise means for supporting the apparatus 102 on the body of a user, and in this instance, the apparatus 102 may be configured to communicate with the sensors of the mat or platform (remote system 104) to measure, capture, and/or track physical dimensions, gestures, motion, movement, and/or changes in orientation of the user. As previously described, the apparatus 102 may include means for supporting the apparatus 102 on the body of a user, such as some type of fastener, clip, snap, button, adhesive, etc., and/or the apparatus 102 may be supported by and/or attached to clothing, a belt, or harness.

[0030] Referring to FIG. 1B, a conceptual diagram illustrates an example of the apparatus 102 and the remote system 104 comprising, in one implementation, a system of one or more reference nodes 106₁, 106₂, ..., 106ₙ, where n refers to any integer. Each reference node 106₁, 106₂, ..., 106ₙ may be any suitable node capable of communicating with the apparatus 102 to support tracking orientation of a user including user dimension, gesture, motion, and/or movement recognition functionality. Each reference node 106₁, 106₂, ..., 106ₙ is configured to communicate with each other node 106₁, 106₂, ..., 106ₙ and the apparatus 102 to support tracking orientation of a user including user dimension, gesture, motion, and/or movement recognition functionality. In at least one implementation of the system 104, each reference node 106₁, 106₂, ..., 106ₙ may be configured to provide at least one input for gesture scaling by the apparatus 102 to improve the gesture accuracy of the user during the operation of the apparatus 102. The apparatus 102 is shown with a wireless connection to each reference node 106₁, 106₂, ..., 106ₙ. However, in other implementations, the apparatus 102 may have a wired connection to one or more reference nodes 106₁, 106₂, ..., 106ₙ.

[0031] In one implementation of the system 104, each reference node 106₁, 106₂, ..., 106ₙ comprises at least one remote sensor configured to provide at least one signal to the apparatus 102.
The signal may include sensing data, sensing parameter data, raw data, reference data, and/or any other relevant data. The signal may include at least a portion of body positioning data, physical dimensions data, body movement data, body tracking data, and/or various other relevant data. Each remote sensor is configured to measure or capture a physical quantity (e.g., physical dimension, motion, movement, acceleration, orientation, distance, range, height, length, etc.) and convert the physical quantity into at least one signal that can be transmitted to and processed by the apparatus 102. Each remote sensor comprises at least one of a remote accelerometer, a remote ranging sensor, and a remote gyro.

[0032] Referring to FIG. 1C, a conceptual diagram illustrates an example of the apparatus 102 and the remote system 104 comprising the one or more reference nodes 106₁, 106₂, ..., 106ₙ. The apparatus 102 comprises a processing system 105 configured to communicate with each of the reference nodes 106₁, 106₂, ..., 106ₙ that may be worn on body parts of a user to obtain body positioning data relating to relative position between the body parts of the user and provide body tracking based on the body positioning data. The body positioning data may relate to ranging and/or angular position between each of the reference nodes 106₁, 106₂, ..., 106ₙ and a reference plane defined by one or more of the reference nodes 106₁, 106₂, ..., 106ₙ, which is described herein. The body positioning data may include data related to one or more physical dimensions of the body of a user and/or data related to one or more movements of the body of the user. The body positioning data may include data related to a relationship between one or more physical dimensions of the body of the user and one or more movements of the body of the user.

[0033] In one aspect of the disclosure, the apparatus 102 may comprise a game console, and the processing system 105 may be configured to support one or more gaming applications executable by the game console. As such, the apparatus 102 comprises means for supporting one or more gaming applications. As previously described, the apparatus 102 may comprise means for supporting the apparatus 102 on the body of a user. The apparatus 102 may comprise at least one sensor 108 configured to generate reference data (i.e., sensing parameters) relating to relative position of at least one body part in a manner as previously described in reference to the reference nodes 106₁, 106₂, ..., 106ₙ. The apparatus 102 communicates with the sensor 108 and/or each reference node 106₁, 106₂, ..., 106ₙ to receive data and information including sensing signals and/or sensing parameters that may include sensing data, sensing parameter data, raw data, reference data, and/or any other type of relevant data. The data, sensing signals, and/or sensing parameters may include a portion of body positioning data, physical dimensions data, body movement data, body tracking data, and/or various other relevant data. In an example, the sensor 108 comprises a sensing means for generating reference data relating to relative position of at least one body part.

[0034] The processing system 105 may obtain body positioning data by computing raw data received from the sensor 108 and/or each reference node 106₁, 106₂, ..., 106ₙ. The processing system 105 may obtain body positioning data by receiving at least a portion of the body positioning data from one or more of the reference nodes 106₁, 106₂, ..., 106ₙ.
The processing system 105 may obtain body positioning data by generating at least a portion of the body positioning data. The body positioning data may include one or more physical dimensions of the body, one or more movements of the body, and/or a relationship between the one or more physical dimensions of the body and the one or more movements of the body to provide body tracking based on the body positioning data.

[0035] The processing system 105 may be configured to determine range and/or angular position between reference nodes 106₁, 106₂, ..., 106ₙ with various RF techniques including monitoring signal strength, monitoring signal attenuation, time of flight of a single signal with timing synchronization, round-trip delay, magnetic field sensing, etc. For example, the processing system 105 may be configured to determine range and/or angular position between reference nodes 106₁, 106₂, ..., 106ₙ by a round-trip delay of multiple signals sent to each node 106₁, 106₂, ..., 106ₙ and/or a round-trip delay of a single signal sent through multiple reference nodes 106₁, 106₂, ..., 106ₙ. The body positioning data may include data and information related to ranging and/or angular position between the apparatus 102 and each of the reference nodes 106₁, 106₂, ..., 106ₙ to provide body tracking based on the body positioning data. The body positioning data may include data and information related to ranging and/or angular position between each of the reference nodes 106₁, 106₂, ..., 106ₙ and a reference plane defined by one or more of the reference nodes 106₁, 106₂, ..., 106ₙ to provide body tracking based on the body positioning data.

[0036] Referring to FIG. 1D, a conceptual diagram illustrates an example of the apparatus 102 and the remote system 104 comprising the one or more reference nodes 106₁, 106₂, ..., 106ₙ. The apparatus 102 comprises the processing system 105 configured to communicate with at least one of the reference nodes 106₁, 106₂, ..., 106ₙ to obtain body positioning data relating to the relative position of the other reference nodes 106₁, 106₂, ..., 106ₙ between the body parts of the user and provide body tracking based on the body positioning data. As described herein, the one or more reference nodes 106₁, 106₂, ..., 106ₙ may be worn on body parts of the user. The body positioning data may relate to ranging and/or angular position between each of the reference nodes 106₁, 106₂, ..., 106ₙ and a reference plane defined by one or more of the reference nodes 106₁, 106₂, ..., 106ₙ, which is described herein. In an example, as shown in FIG. 1D, the reference nodes 106₁, 106₂, ..., 106ₙ are configured to communicate with each other to transfer body positioning data therebetween, and at least one of the reference nodes, such as reference node 106₂, is configured to communicate with the processing system 105 so that the processing system 105 obtains the body positioning data relating to relative position between the body parts of the user.

[0037] In one aspect of the disclosure, the reference nodes 106₁, 106₂, ..., 106ₙ may be configured to communicate with each other with one type of communication technology, and the processing system 105 may be configured to communicate with one or more of the reference nodes 106₁, 106₂, ..., 106ₙ with the same communication technology or another different communication technology.
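Paragraph [0035] lists round-trip delay among the RF ranging techniques. The following is a minimal sketch in Python of that one technique, assuming the responding node's turnaround (processing) delay is known; the function and constant names, and the example numbers, are illustrative rather than taken from the disclosure:

    SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

    def range_from_round_trip(rtt_s: float, turnaround_s: float) -> float:
        """Estimate the distance between two nodes from round-trip delay:
        subtract the responder's known turnaround delay from the measured
        round-trip time, halve it to get one-way flight time, and convert
        to distance at the speed of light."""
        one_way_s = (rtt_s - turnaround_s) / 2.0
        return SPEED_OF_LIGHT_M_PER_S * one_way_s

    # A 10 ns round trip with a 3.33 ns turnaround corresponds to
    # roughly 1 meter between the two nodes.
    print(range_from_round_trip(10.0e-9, 3.33e-9))

The signal-strength and attenuation techniques mentioned in the same paragraph would replace the time calculation with a path-loss model; the round-trip form is shown here only because it is the most self-contained.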
[0038] In accordance with aspects of the disclosure, body tracking may be achieved with visual features, optical markers, mechanical sensing, magnetic sensing, and acoustic sensing. In one example, visual feature tracking utilizes cameras to capture and recognize visual gestures of a user. This technique utilizes a controlled space, controlled lighting conditions, and sometimes post-processing to track gross visual gestures for one or more bodies. In another example, optical marker tracking utilizes multiple cameras to capture a position of wearable markers, such as reflective or infrared markers. This technique utilizes a controlled space, controlled lighting conditions, and lengthy post-processing to track gross visual gestures for one or more bodies. The optical marker tracking is different than visual tracking in its ability to capture detailed data and fine gestures. In another example, mechanical tracking utilizes wearable inertial sensors to capture motion and may be worn by a user to track movement. This technique may not need controlled space or light, but frequent re-calibration may be needed. In another example, magnetic tracking utilizes receivers with multiple (e.g., 3) orthogonal coils to measure relative magnetic flux from orthogonal coils on a transmitter, with either the receiver or the transmitter static. This technique reduces line of sight problems and provides an orientation of the user in space. In another example, acoustic tracking utilizes a system of wearable receivers to track signals to and from wearable beacons. This technique utilizes at least one off-body beacon to avoid root drift, temperature and humidity calibration, and supports tracking multiple bodies with different frequencies.

[0039] In accordance with an aspect of the disclosure, the body tracking apparatus 102 that may be worn is provided and is configured to interface with one or more transmitting and receiving sensors (i.e., reference nodes 106₁, 106₂, ..., 106ₙ) worn on a body of a user. The apparatus 102 is configured to track body orientation in space without using an off-body beacon. The user may wear multiple sensors (i.e., reference nodes 106₁, 106₂, ..., 106ₙ) around a portion of the body (e.g., waist) to form a reference plane, so that other worn sensors (i.e., reference nodes 106₁, 106₂, ..., 106ₙ) can have their orientation tracked in reference to the reference plane. The multiple sensors (i.e., reference nodes 106₁, 106₂, ..., 106ₙ) may be configured to form the reference plane and are able to track orientation of any other proximate sensor using an equation, e.g., as provided in FIG. 3H.

[0040] In accordance with an aspect of the disclosure, the reference plane may be defined on a portion of the user's body (e.g., around the user's waist), and the reference plane may be utilized as an initial orientation input for body tracking. Placing other sensors on different parts of the body may allow 3-Dimensional movement tracking in reference to the defined reference plane (see the sketch below). For example, tracking orientation of multiple sensors as a reference plane may be utilized to define an additional remote sensor to a particular body part (e.g., left hand). The apparatus 102 may include a learning module to match the movement being input to body movements in a database or past stored personal data. The apparatus 102 may include a user interface for prompting the user to enter physical body measurements before tracking body positioning.
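The actual equation is left to FIG. 3H, which is not reproduced in this text. As a generic illustration of the reference-plane geometry only, the following sketch in Python defines a plane from three waist-worn node positions and computes a fourth node's signed offset from it; all names and coordinates are illustrative assumptions, not the disclosed equation:

    import numpy as np

    def plane_from_waist_nodes(p1, p2, p3):
        """Define the reference plane spanned by three waist nodes:
        return a point on the plane and a unit normal vector."""
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
        normal = np.cross(p2 - p1, p3 - p1)
        return p1, normal / np.linalg.norm(normal)

    def offset_from_plane(node, plane_point, plane_normal):
        """Signed distance of a worn node (e.g., a hand sensor) from the
        waist reference plane; the sign distinguishes above from below."""
        node = np.asarray(node, dtype=float)
        return float(np.dot(node - plane_point, plane_normal))

    # Three waist nodes at ~1 m height define a horizontal plane;
    # a hand node at 1.4 m sits 0.4 m above it.
    point, normal = plane_from_waist_nodes([0.0, 0.0, 1.0],
                                           [0.3, 0.0, 1.0],
                                           [0.0, 0.2, 1.0])
    print(offset_from_plane([0.1, 0.1, 1.4], point, normal))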
Defined sensors as reference nodes may be utilized to track key body parts, such as head, shoulders, hands, feet, elbows, knees, etc., and defined sensors as reference nodes may provide accurate orientation data in reference to the reference plane sensors worn, e.g., at the user's waist. In one example, defined sensors at a user's feet may provide a ground reference. In another example, defined sensors worn at the user's waist may be arranged to provide useful orientation information for a front, back, left, and/or right side of the user's waist. The apparatus 102 may be configured to reduce reading errors and compensate for a lower than optimal number of readings. The apparatus 102 may utilize a user interface configured to prompt the user to enter body measurements, such as height, arm length, and/or arm span, which may provide physical dimension data for the apparatus 102 to use when tracking orientation and may reduce tracking errors.

[0041] In accordance with an aspect of the disclosure, the apparatus 102 may be configured to interface with multiple positioning sensors worn on a body, such as at a user's waist, to generate a mobile 3-Dimensional body tracking frame of reference. The apparatus 102, as provided herein, is configured to track orientation of a body without the use of an off-body beacon or a controlled environment, which would require many off-body beacons, sensors, or cameras. In one example, the apparatus 102 may be configured to interface with many sensors for detailed body tracking. The apparatus 102 may be configured to utilize one or more defined sensors to track one or more body parts and/or obtain body measurement data from the user to scale orientation information to track movement.

[0042] In accordance with an aspect of the disclosure, the apparatus 102 may be configured to utilize multiple transmitting and receiving sensors worn at the waist of a user to create a 3-Dimensional frame of reference and orientation information to track other worn sensors as nodes for other body parts and distances to be related to. The apparatus 102 may be configured to define remote sensors to particular body parts and sides of the body to provide orientation data other than the sensors worn at the waist to further enhance body tracking in reference to orientation in space. The apparatus 102 may be configured to utilize a learning module to track and/or match movement being input to body movements in a database or past stored personal data to enhance body tracking, which may reduce error and assist with an insufficient number of readings to estimate missed movements. The apparatus 102 may be configured to obtain body measurement data from user input to enhance body tracking and/or scaling of body tracking data, which may reduce errors and assist with an insufficient number of readings to estimate missed movements.

[0043] In accordance with an aspect of the disclosure, the apparatus 102 may be configured to scale user gestures and/or movements according to at least one physical dimension of the user to improve the user's experience. For example, scaling refers to a linear transformation that alters (i.e., increases or decreases) the reference size of an object or objects by a scale factor that may be similar in all directions. For example, two objects of the same height may be positioned at different distances from a reference point.
From the view of the reference point, the object positioned at a greater distance from the reference point may appear smaller, even though the objects are of the same height. Thus, knowing the distance of each object from the reference point and each object's height provides a way to scale the objects in a uniform manner to be judged as the same height without regard to each object's position with respect to the reference point.

[0044] The apparatus 102 may be used for scaling gesture recognition to physical dimensions of a user. As described herein, the apparatus 102 is configured to obtain at least one physical dimension of a user and determine a gesture of the user based on the at least one physical dimension without regard to the location of the user relative to the apparatus 102. For example, in one implementation of the apparatus 102, the gesture input a user provides for triggering an action or entering a command may need to be scaled significantly differently for users that vary in physical dimensions, such as height, arm span, etc. For instance, two different users, such as an adult and a child, may attempt to enter a similar command using a gesture input, and the necessary movement may be too large for the child or too small for the adult. Therefore, gesture scaling may improve accuracy of user gesture input. The apparatus 102 may achieve gesture scaling by obtaining or determining at least one physical dimension of the user. The at least one physical dimension of the user may include height of the user, arm span of the user, or height of a handheld device held at a neutral position, such as arms at the user's side. In some instances, accuracy of a scaling may vary depending on which physical dimension is selected or how it is calculated if not directly input by the user.

[0045] In one implementation of the apparatus 102, a physical dimension of a user may be determined by direct user input via a user interface device, such as a handheld device. For example, prompting the user to enter height or length of arm span may provide an accurate representation of the scaling necessary for accurate interpretation of gesture input by the user.

[0046] In another implementation of the apparatus 102, a physical dimension of a user may be determined by utilizing a system 104 of sensors and prompting user movement. For example, the user may be prompted to touch a body part, such as the user's head, and then touch a baseline reference point, such as the ground, with a handheld device having a mechanical sensing device, such as an accelerometer, gyro, etc. The system 104 of sensors may be configured to identify and record starts and/or stops in movement, interpreting the large movement between them as the distance from head to ground. This learning technique may provide an approximation of height, as different users may move the device from head to the ground in different manners. Some users may move the handheld device a shortest possible distance, and some users may move the handheld device a longer distance from head to ground.

[0047] In another implementation of the apparatus 102, a physical dimension of a user may be determined by utilizing a system 104 of ranging sensors with known physical dimensions, such as height, paired with prompted user movement. For example, if a user stands on a sensing mat having one or more ranging sensors with a known height of zero and the user is then prompted to touch their head with a handheld device having a ranging sensor, the height of the user may be calculated in an accurate manner.
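The disclosure does not spell out the height calculation for the mat-plus-handheld example above. One way to realize it, shown here as a minimal Python sketch, assumes two ground-level ranging sensors separated by a known baseline and a range reading from each to the handheld device at the user's head; this planar two-anchor triangulation, and all names and numbers in it, are illustrative assumptions:

    import math

    def height_from_two_ground_anchors(d1: float, d2: float,
                                       baseline: float) -> float:
        """Estimate user height from ranges d1 and d2 (meters) between a
        head-held device and two ground sensors a known baseline apart.
        Solve for the horizontal offset x of the head along the baseline,
        then take the vertical component (a simplified planar model that
        ignores any offset out of the vertical plane of the baseline)."""
        x = (d1 ** 2 - d2 ** 2 + baseline ** 2) / (2.0 * baseline)
        return math.sqrt(max(d1 ** 2 - x ** 2, 0.0))

    # Ranges of 1.80 m and 1.86 m to anchors 0.5 m apart imply a
    # height of about 1.80 m.
    print(height_from_two_ground_anchors(1.80, 1.86, 0.5))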
Height and other physical dimensions may be calculated accurately when the system 104 of multiple ranging sensors (e.g., two or more ranging sensors) is utilized in combination with prompted movement with a handheld device. Alternatively, the system 104 of multiple ranging sensors may include two or more ranging sensors, and/or the system 104 of multiple ranging sensors may or may not be worn or held by the user.

[0048] In one implementation of the apparatus 102, user gestures may be obtained and/or determined based on at least one physical dimension of the user and an identified movement, wherein determining a gesture includes calibration of scale based on a relationship between the at least one physical dimension and the at least one movement of the user. This relationship may be defined in a lookup table or calculated by an equation. The apparatus 102 may be configured to improve accuracy of gesture scaling through a learning algorithm that accesses a history of movements performed by the user and adjusts an initial scaling that may be based on the at least one physical dimension. Information related to user physical dimensions, user movements, scaling, calibration of scale, any and all relationships between user physical dimensions and user movements, and history of user physical dimensions and user movements may be stored as part of a computer readable medium.

[0049] In another implementation of the apparatus 102, gesture recognition may be scaled to one or more physical dimensions of a user by selectively adjusting movements for particular inputs and/or commands to any physical dimension of the user. Properly adjusted movement parameters may provide an improved gesture input experience for users, may add an element of fairness to gaming applications, may reduce a risk of body strain when stretching farther than a natural range of movement for a user (e.g., forced adaptation of undersized users to large scale reference points), and may reduce the amount of attention necessary for satisfying user parameters when movement is confined to an unnaturally narrow range (e.g., forced adaptation of oversized users to small scale reference points). Information related to user movement parameters along with user physical dimensions, user movements, scaling, calibration of scale, any and all relationships between user physical dimensions and user movements, and history of user physical dimensions and user movements may be stored as part of a computer readable medium.

[0050] It should be appreciated that the teachings provided herein may be incorporated into (e.g., implemented within or performed by) various apparatuses (e.g., devices). For example, aspects of the disclosure may be incorporated into a phone (e.g., a cellular phone), a personal data assistant ("PDA"), an entertainment device (e.g., a music or video device), a headset (e.g., headphones, an earpiece, etc.), a microphone, a medical sensing device (e.g., a biometric sensor, a heart rate monitor, a pedometer, an EKG device, a smart bandage, etc.), a user I/O device (e.g., a watch, a remote control, a light switch, a keyboard, a mouse, etc.), an environment sensing device (e.g., a tire pressure monitor), a monitor that may receive data from the medical or environment sensing device, a computer, a processor, a point-of-sale device, an audio/video device, a gaming console, a hearing aid, a set-top box, or any other suitable device.
In particular, the teachings provided herein may be incorporated into various gaming apparatuses, consoles, devices, etc., such as Wii™, PlayStation™ or Xbox 360™, or other gaming platforms. The teachings provided herein may be incorporated into remote controls for gaming consoles, such as gaming controllers used with Wii™, PlayStation™ or Xbox 360™, or other gaming platforms, as well as gaming controllers used with personal computers, including tablets, computing pads, laptops, or desktops. Accordingly, any of the apparatuses, devices, and/or systems described herein may be implemented using some or all parts of the components described in FIGS. 1A and/or 1B.

[0051] In an aspect of the disclosure, the apparatus 102 provides the processing system 105 as a means for communicating with the remote system 104 including one or more reference nodes 106₁, 106₂, ..., 106ₙ that may be worn on body parts to obtain body positioning data relating to relative position between the body parts. The processing system 105 may provide a means for receiving at least a portion of the body positioning data from one or more of the reference nodes 106₁, 106₂, ..., 106ₙ. The processing system 105 may provide a means for communicating with the reference nodes 106₁, 106₂, ..., 106ₙ when worn on body parts of multiple users to obtain the body positioning data. Further, the apparatus 102 provides the processing system 105 as a means for providing body tracking based on the body positioning data, which may relate to ranging and/or angular position between each of the reference nodes 106₁, 106₂, ..., 106ₙ and a reference plane defined by one or more of the reference nodes. The processing system 105 may provide a means for generating at least a portion of the body positioning data. The apparatus 102 may provide a sensing means for generating reference data relating to relative position of at least one body part, wherein the sensing means comprises a sensor, such as sensor 108 in FIG. 1C.

[0052] FIG. 2A shows an example of a process for scaling gesture recognition, in accordance with an aspect of the disclosure. In block 210, the apparatus 102 may communicate with the remote system 104. In block 214, the apparatus 102 may prompt the user for input. Alternatively, the apparatus 102 may be configured to communicate with one or more of the reference nodes 106₁, 106₂, ..., 106ₙ and prompt the user for input. User input may be in the form of direct user input into the apparatus 102 or via a remote handheld device, and/or user input may be in the form of a learned behavior, as described herein.

[0053] In block 218, the apparatus 102 may obtain or determine at least one physical dimension of the user. The at least one physical dimension may comprise at least one of height of the user, length of arm span of the user, and distance of the user from the apparatus 102. The apparatus 102 may be configured to obtain at least one physical dimension of the user by receiving the at least one physical dimension as an input by the user. The apparatus 102 may be configured to obtain at least one physical dimension of the user by learning the at least one physical dimension of the user from a sensor map having at least one sensor positioned proximate to the ground and a device held by the user at a physical height of the user.
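Blocks 218 and 222 (and the learning technique of paragraph [0046]) depend on recognizing when a prompted movement starts and stops in a stream of sensor samples. A minimal sketch of one way to segment such a movement from accelerometer magnitudes follows; the rest level, threshold, and all names are illustrative assumptions rather than disclosed values:

    def find_movement_segment(accel_mags, rest_level=9.81, threshold=1.5):
        """Return (start_index, stop_index) of the first large movement in
        a sequence of accelerometer magnitudes (m/s^2), detected as the
        span where readings depart from the at-rest gravity reading by
        more than a threshold. Returns None if no movement is found."""
        start = None
        for i, a in enumerate(accel_mags):
            moving = abs(a - rest_level) > threshold
            if moving and start is None:
                start = i
            elif not moving and start is not None:
                return (start, i)
        return None if start is None else (start, len(accel_mags) - 1)

    # A burst of acceleration between samples 2 and 5:
    print(find_movement_segment([9.8, 9.8, 12.5, 13.0, 12.0, 9.8, 9.8]))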
The apparatus 102 may be configured to obtain at least one physical dimension of the user by learning the at least one physical dimension of the user from a handheld device moved between a first position proximate to the ground and a second position proximate to a physical height of the user. The apparatus 102 may be configured to obtain at least one physical dimension of the user by prompting the user to lift at least one arm until parallel to the ground to measure arm span of the user.

[0054] In block 222, the apparatus 102 may identify at least one movement of the user. The apparatus 102 may be configured to identify the at least one movement of the user by capturing the at least one movement from at least one of a remote accelerometer, a remote ranging sensor, or a remote gyro.

[0055] In block 226, the apparatus 102 may calibrate scale for the user. The apparatus 102 may be configured to calibrate scale for the user based on a relationship between the at least one physical dimension and the at least one movement of the user.

[0056] In block 230, the apparatus 102 may determine a gesture of the user. The apparatus 102 may be configured to determine the gesture of the user based on the at least one physical dimension without regard to a location of the user relative to the apparatus 102. The apparatus 102 may be configured to identify at least one movement of the user, and determine the gesture of the user based also on the at least one identified movement. The apparatus 102 may be configured to determine the gesture as a calibration of scale based on a relationship between the at least one physical dimension and the at least one movement of the user.

[0057] The apparatus 102 may utilize a lookup table to determine the relationship between the at least one physical dimension and the at least one movement of the user. The apparatus 102 may define the relationship by utilizing an equation (see the sketch below).

[0058] In block 234, the apparatus 102 may be configured to optionally store information related to the determined gesture of the user. The apparatus 102 may be configured to store information related to the determined gesture of the user based on the at least one physical dimension without regard to a location of the user relative to the apparatus 102. The apparatus 102 may be configured to store information related to the identified movement of the user and store information related to the determined gesture of the user based also on the at least one identified movement. The apparatus 102 may be configured to store information related to the determined gesture as a calibration of scale based on a relationship between the at least one physical dimension and the at least one movement of the user. Any information related to the determined gesture of the user may be stored or recorded in a computer readable medium. Obtaining, determining, identifying, calibrating, scaling, storing, recording, and/or communicating information related to user gestures, user physical dimensions, and/or user movements may be utilized by the apparatus 102 to replay as an avatar of the user, without departing from the scope of the disclosure.

[0059] It will be understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged.
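Neither the lookup table nor the equation of blocks 226-230 is given in the disclosure. The following minimal sketch in Python illustrates both forms of the calibration, assuming user height as the physical dimension and a simple linear ratio against an assumed reference height; the table entries, reference value, and names are all illustrative:

    # Illustrative lookup table: user height (m) -> scale factor.
    SCALE_TABLE = {1.2: 0.67, 1.5: 0.83, 1.8: 1.0, 2.0: 1.11}

    REFERENCE_HEIGHT_M = 1.8  # assumed calibration reference

    def scale_factor(height_m, table=None):
        """Calibrate scale for a user: use the nearest table entry when a
        lookup table is supplied, otherwise the equation form (a linear
        ratio of the user's height to the reference height)."""
        if table:
            nearest = min(table, key=lambda h: abs(h - height_m))
            return table[nearest]
        return height_m / REFERENCE_HEIGHT_M

    def normalize_movement(distance_m, height_m):
        """Scale a measured movement so the same relative gesture by a
        short user and a tall user maps to the same normalized input."""
        return distance_m / scale_factor(height_m)

    # A 0.5 m reach by a 1.5 m user normalizes to the same input as a
    # 0.6 m reach by a 1.8 m reference user.
    print(normalize_movement(0.5, 1.5))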
[0059] It will be understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented. [0060] FIG. 2B shows an example of a process for tracking user orientation, in accordance with an aspect of the disclosure. In block 240, the apparatus 102 may communicate with the remote system 104 including one or more reference nodes 106₁, 106₂, . . ., 106ₙ. In block 244, the apparatus 102 may optionally prompt the user for input. The apparatus 102 may be configured to communicate with the one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ and prompt the user for input. User input may be in the form of direct user input via a user interface component (e.g., a handheld device), and/or user input may be in the form of a learned behavior, as described in greater detail herein. [0061] In block 248, the apparatus 102 obtains or determines body positioning data relating to relative position between body parts. The apparatus 102 may be configured to be worn on the body with the remote system 104 including one or more reference nodes 106₁, 106₂, . . ., 106ₙ. The apparatus 102 may be configured to communicate with the remote system 104 including one or more reference nodes 106₁, 106₂, . . ., 106ₙ to obtain the body positioning data. The remote system 104 may include a set of reference nodes 106₁, 106₂, . . ., 106ₙ worn on the body to define a reference plane, and the body positioning data includes the reference plane defined by the remote system 104. The remote system 104 may include one or more additional nodes 106₁, 106₂, . . ., 106ₙ worn on one or more body parts, and the body positioning data relates to a distance between each of the one or more additional nodes 106₁, 106₂, . . ., 106ₙ and the reference plane. The body positioning data may include one or more physical dimensions of the body. [0062] In block 250, the apparatus 102 may be configured to generate at least a portion of the body positioning data. In an example, the apparatus 102 may obtain or determine body positioning data relating to relative position between body parts by generating at least a portion of the body positioning data. [0063] In block 252, the apparatus 102 may be configured to receive at least a portion of the body positioning data from the remote system 104 including from one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ. In an example, the apparatus 102 may obtain or determine body positioning data relating to relative position between body parts by receiving at least a portion of the body positioning data from the remote system 104 including from one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ. [0064] In block 254, the apparatus 102 may be configured to identify at least one movement of the user. The apparatus 102 may be configured to identify at least one movement of the user by capturing the at least one movement from at least one of a remote accelerometer, a remote ranging sensor, or a remote gyro. The body positioning data may include one or more movements of the body. The positioning data may include a relationship between the one or more physical dimensions of the body and the one or more movements of the body. The body positioning data may include tracked body movements. The apparatus 102 may be configured to create an historical record of body movements from the body positioning data. [0065] In block 256, the apparatus 102 provides body tracking based on the body positioning data.
The apparatus 102 may utilize the equation of FIG. 3H to determine a relationship between a reference plane and a reference node as related to a body. The apparatus 102 may define the relationship by utilizing the equation of FIG. 3H to track movement of at least one body part in reference to the body as defined by the reference plane. In an aspect of the disclosure, providing body tracking may include creating a historical record of the one or more physical dimensions of the body and/or the one or more movements of the body from the body positioning data. In another aspect of the disclosure, providing body tracking related to the user may include creating a historical record of a relationship between one or more physical dimensions of the body and one or more movements of the body from the body positioning data. [0066] In block 260, the apparatus 102 is configured to optionally store body tracking data related to a user. The apparatus 102 may be configured to store information related to the body tracking data relating to positioning data between body parts. The apparatus 102 may be configured to store data and information related to the identified movement of the user and store information related to body tracking of the user based also on the at least one identified movement. Any information related to body tracking data of the user may be stored or recorded in a computer readable medium. Obtaining, determining, identifying, calibrating, scaling, storing, recording, and/or communicating information related to user body tracking data, user gestures, user physical dimensions, and/or user movements may be utilized by the apparatus 102 to replay as an avatar of the user, without departing from the scope of the disclosure. In an aspect of the disclosure, storing body tracking data related to the user may include creating a historical record of the one or more physical dimensions of the body and/or the one or more movements of the body from the body positioning data. In another aspect of the disclosure, storing body tracking data related to the user may include creating a historical record of a relationship between one or more physical dimensions of the body and one or more movements of the body from the body positioning data. [0067] It will be understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented. [0068] FIG. 2C shows an example of a flow diagram for processing data and/or information related to tracking user orientation, in accordance with an aspect of the disclosure. The apparatus 102 is configured to communicate with the remote system 104 including the reference nodes 106₁, 106₂, . . ., 106ₙ to receive sensing parameters 270 from the reference nodes 106₁, 106₂, . . ., 106ₙ. In one implementation, the reference nodes 106₁, 106₂, . . ., 106ₙ are worn on body parts of a user so that the apparatus 102 may obtain body positioning data 272 relating to relative position between the body parts of the user and provide body tracking 290 based on the body positioning data 272. The body positioning data 272 relates to ranging and/or angular position between each of the reference nodes 106₁, 106₂, . . ., 106ₙ and a reference plane defined by one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ.
The body positioning data 272 may include one or more physical dimensions of the body 274. The body positioning data 272 may include one or more movements of the body 276. The body positioning data 272 may include data related to a relationship between the one or more physical dimensions of the body and the one or more movements of the body 280. The body positioning data 272 may include a historical record of the one or more physical dimensions of the body 282. The body positioning data 272 may include a historical record of the one or more movements of the body 284. The body positioning data 272 may include other relevant data 286. [0069] The apparatus 102 is configured to communicate with the sensor 108 and/or each reference node 106₁, 106₂, . . ., 106ₙ to receive data and information including sensing signals and/or sensing parameters that may include sensing data, sensing parameter data, raw data, reference data, and/or any other type of relevant data. The data, sensing signals, and/or sensing parameters may include a portion of body positioning data, physical dimensions data, body movement data, body tracking data, and/or various other relevant data. [0070] The apparatus 102 may be configured to generate at least a portion of the body positioning data and/or receive at least a portion of the body positioning data from one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ. The apparatus 102 may comprise a game console, and the apparatus 102 may be configured to support one or more gaming applications. The apparatus 102 may include means for supporting the apparatus 102 on the body of a user, such as some type of fastener, clip, snap, button, adhesive, etc., and/or the apparatus 102 may be supported by and/or attached to clothing, a belt, or harness. [0071] The apparatus 102 may be configured to communicate with the reference nodes 106₁, 106₂, . . ., 106ₙ when worn on body parts of multiple users to obtain the body positioning data. As such, the body positioning data may relate to ranging and/or angular position between each of the reference nodes 106₁, 106₂, . . ., 106ₙ worn on different users and/or a reference plane defined by one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ worn on at least one of the users. [0072] FIGS. 3A-3C are conceptual diagrams illustrating examples of the apparatus 102 and the remote system 104 being configured to determine at least one physical dimension of the user by utilizing one or more reference nodes 106₁, 106₂, . . ., 106ₙ. [0073] For example, referring to FIG. 3A, the apparatus 102 may determine at least one physical dimension of the user by learning the at least one physical dimension of the user from a node map 302 defining the range or distance between the apparatus 102 and at least one reference node 106₁ positioned proximate to the user. [0074] In another example, referring to FIG. 3B, the apparatus 102 may determine at least one physical dimension (e.g., length of arm span) of the user by learning the at least one physical dimension of the user from another node map 312 having at least one reference node 106₁ positioned proximate to one hand of the user and at least one other reference node 106₂ positioned proximate to the other hand of the user at a physical arm span of the user.
The apparatus 102 may be configured to determine a range or distance between the apparatus 102 and each reference node 106₁, 106₂ to thereby establish a geometric measurement (e.g., triangulation) therebetween. [0075] In one implementation, each reference node 106₁, 106₂ may be integrated as part of a handheld device, wherein learning the at least one physical dimension (e.g., length of arm span) of the user comprises moving the handheld device between a first position proximate to one outstretched arm of the user and a second position proximate to the other outstretched arm of the user. Determining arm span of the user may include prompting the user to physically lift each arm until parallel to ground level to measure arm span of the user, which comprises the distance between each hand or fingers of each hand when both arms are outstretched from the body and parallel with the ground. [0076] In another implementation, a first reference node 106₁ may be integrated as part of a first handheld device, and a second reference node 106₂ may be integrated as part of a second handheld device, wherein learning the at least one physical dimension (e.g., length of arm span) of the user comprises holding the first handheld device at a first position proximate to one hand of the user and holding the second handheld device at a second position proximate to the other hand of the user. Determining arm span of the user may include prompting the user to physically lift each arm until parallel to ground level to measure arm span of the user, which comprises the distance between each hand or fingers of each hand when both arms are outstretched from the body and parallel with the ground. [0077] In another example, referring to FIG. 3C, the apparatus 102 may determine at least one physical dimension (e.g., height) of the user by learning the at least one physical dimension of the user from another node map 322 having at least one reference node 106₁ positioned proximate to ground level and at least one other reference node 106₂ positioned proximate to the physical height of the user. The apparatus 102 may be configured to determine a range or distance between the apparatus 102 and each reference node 106₁, 106₂ to thereby establish a geometric measurement (e.g., triangulation) therebetween. [0078] In one implementation, each reference node 106₁, 106₂ may be integrated as part of a handheld device, wherein learning the at least one physical dimension (e.g., height) of the user comprises moving the handheld device between a first position proximate to ground level and a second position proximate to the physical height of the user. Determining height of the user may include prompting the user to physically position the handheld device at ground level to obtain a first reference point at ground level and then prompting the user to physically position the handheld device proximate to the user's head to obtain a second reference point at the physical height of the user. [0079] In another implementation, a first reference node 106₁ may be integrated as part of a first handheld device, and a second reference node 106₂ may be integrated as part of a second handheld device, wherein learning the at least one physical dimension (e.g., height) of the user comprises holding the first handheld device at a first position proximate to ground level and holding the second handheld device at a second position proximate to the top of the head of the user. Determining height of the user may include prompting the user to physically position the first handheld device at ground level to obtain a first reference point at ground level and then prompting the user to physically position the second handheld device proximate to the head of the user to obtain a second reference point at the physical height of the user.
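The geometric measurement just described can be made concrete with the law of cosines: given the range from the apparatus to each handheld node and the angle between those two ranges, the separation of the nodes — an arm-span or height estimate — follows directly. A minimal Python sketch, assuming the ranges and included angle are already available from the ranging hardware:

import math

def node_separation(r1_m: float, r2_m: float, included_angle_rad: float) -> float:
    # Distance between two reference nodes, from the triangle formed by the
    # apparatus and the two nodes (law of cosines).
    return math.sqrt(r1_m ** 2 + r2_m ** 2
                     - 2.0 * r1_m * r2_m * math.cos(included_angle_rad))

# Nodes held in each outstretched hand, each about 1.0 m from the apparatus,
# with roughly 100 degrees between the two ranges -> arm-span estimate.
print(f"{node_separation(1.0, 1.0, math.radians(100)):.2f} m")  # ~1.53 m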
[0080] In an aspect of the disclosure, the apparatus 102 is configured to communicate with at least one of the reference nodes 106₁, 106₂ to obtain body positioning data relating to the relative position of the other reference nodes between the body parts of the user and provide body tracking based on the body positioning data. In an example, the reference nodes 106₁, 106₂ are configured to communicate with each other to transfer body positioning data therebetween, and at least one of the reference nodes, such as reference node 106₁, is configured to communicate with the apparatus 102 so that the apparatus obtains the body positioning data relating to relative position between the body parts of the user. [0081] It will be appreciated that one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ may be positioned anywhere proximate to the user's body or body parts (e.g., hands, feet, head, abdomen, shoulders, etc.) to determine and/or obtain one or more physical dimensions of the user to scale user gestures according to the user's physical dimensions. [0082] It will be appreciated that any information related to a user including user physical dimensions, user movements, user movement parameters, user scaling parameters, any and all relationships between user physical dimensions and user movements, history of user physical dimensions, and history of user movements may be stored as part of a computer readable medium. [0083] FIG. 3D is a conceptual diagram illustrating an example of the apparatus 102 and the remote system 104 being configured to determine at least one movement of the user by utilizing one or more reference nodes 106₁, 106₂, . . ., 106ₙ. [0084] For example, referring to FIG. 3D, the apparatus 102 may determine at least one movement of the user by learning the at least one movement of the user from changes in node maps 332, 334, 336, 338, which identify movement of at least one reference node 106₁ in reference to the apparatus 102. The apparatus 102 may be configured to define movement as a change in position of at least one reference node 106₁ in reference to the position of the apparatus 102 and the position of at least one other reference node 106₂. However, the apparatus 102 may be configured to define movement as a change in position of at least one reference node 106₁ in reference to only the position of the apparatus 102. [0085] Referring to node map 332 of FIG. 3D, the apparatus 102 may be configured to calculate a range or distance between the apparatus 102 and each reference node 106₁, 106₂ to thereby establish a first geometric measurement (e.g., triangulation) therebetween. The node map 332 refers to a first node configuration of the apparatus 102 in relation to the reference nodes 106₁, 106₂. Referring to node map 334, the user generates movement by moving the first reference node 106₁ to another position to establish a second node configuration as shown by node map 336.
Referring to node map 336, the apparatus 102 is configured to calculate another range or distance between the apparatus 102 and each reference node 106₁, 106₂ to thereby establish a second geometric measurement (e.g., triangulation) therebetween. The movement range or distance may be determined by calculating the change in position. As such, referring to node map 338, the apparatus 102 is configured to calculate still another range or distance between the apparatus 102 and the change in position of the reference node 106₁ to thereby establish a third geometric measurement (e.g., triangulation) therebetween, which results in determining the range or distance of movement. [0086] It will be appreciated that any information related to node maps including node maps corresponding to user physical dimensions, user movements, user movement parameters, user scaling parameters, any and all relationships between user physical dimensions and user movements, history of user physical dimensions, and history of user movements may be stored as part of a computer readable medium. [0087] As described herein, user gestures may originate from any user body motion, movement, and/or pose, and user gestures include full body motion, movement, and/or pose and any body part motion, movement, and/or pose. For example, user gestures may include hand movements (e.g., punch, chop, lift, etc.), foot movements (e.g., kick, knee bend, etc.), head movements (e.g., head shake, nod, etc.), and/or body movements (e.g., jumping, kneeling, lying down, etc.). [0088] The apparatus 102 may be configured to determine user gestures as 2-dimensional and 3-dimensional spatial positioning of at least one body point (e.g., as defined by a node). The apparatus 102 may be configured to translate changes in 2-dimensional and 3-dimensional spatial positioning of a body point into a user gesture, which may be referred to as body motion, body movement, and/or changes between body poses. The apparatus 102 may be configured to determine 2-dimensional and 3-dimensional spatial positioning of a body point relative to a node on a user's body and/or a node on another user's body. The apparatus 102 may be configured to determine 2-dimensional and 3-dimensional spatial positioning of a body point relative to the apparatus 102. [0089] The apparatus 102 may be configured to determine user gestures (e.g., body motion, movement, and/or pose) by obtaining one or more physical dimensions that are between at least two body parts of the user (e.g., wrist and foot). The apparatus 102 may be configured to determine user gestures (e.g., body motion, movement, and/or pose) by obtaining one or more physical dimensions that are between at least two body parts of separate users (e.g., a distance between hands of different users). [0090] The apparatus 102 may be configured to determine user gestures by capturing signals from one or more remote ranging sensors that cover ranging (e.g., distance) between at least two body points on the user's body. The apparatus 102 may be configured to determine user gestures by capturing signals from one or more remote ranging sensors from a handheld device and/or a wearable device, belt, or harness that may be attached to the body, a body part, and/or as part of clothing.
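Once the positions in two node maps have been resolved by such geometric measurements, the "range or distance of movement" reduces to the distance between the node's earlier and later positions. A minimal Python sketch, assuming the coordinates have already been resolved into a common frame (the values are hypothetical):

import math

def movement_range(before: tuple, after: tuple) -> float:
    # The third geometric measurement of node map 338: distance between a
    # node's position in the first and second configurations.
    return math.dist(before, after)

# The first reference node moves 0.3 m forward and 0.4 m upward.
print(f"{movement_range((0.5, 0.0, 1.0), (0.5, 0.3, 1.4)):.2f} m")  # 0.50 m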
[0091] FIGS. 3E-3F are conceptual diagrams illustrating examples of the apparatus 102 and the remote system 104 being configured to determine body positioning data relating to a user by utilizing one or more reference nodes 106₁, 106₂, . . ., 106ₙ. In one aspect of the disclosure, the apparatus 102 and one or more reference nodes 106₁, 106₂, . . ., 106ₙ may be utilized by the apparatus 102 to define a reference plane on the body of the user. [0092] For example, referring to FIG. 3E, the apparatus 102 may determine body positioning data related to the user by defining a reference plane from a node map 352, which may include the distance between the apparatus 102 and reference nodes 106₁, 106₂ positioned proximate to the body of the user. The apparatus 102 may be configured to determine the reference plane based on a distance and angles between the apparatus 102 and each reference node 106₁, 106₂ to thereby establish a geometric measurement (e.g., triangulation) therebetween. The apparatus 102 and the reference nodes 106₁, 106₂ may be worn on the body of the user, such as, for example, at the waist of the user. The apparatus 102 and each reference node 106₁, 106₂ may be worn by the user as clothing, a belt, a harness, etc., or the apparatus 102 and each reference node 106₁, 106₂ may be attached to the user's body by some other means. The apparatus 102 may be worn by the user proximate to a front part of the user's waist, a first reference node 106₁ may be worn by the user proximate to one side of the user's waist, and a second reference node 106₂ may be worn by the user proximate to the other side of the user's waist. In this arrangement, the apparatus 102 and reference nodes 106₁, 106₂ may be configured to define the reference plane proximate to the user's waist and may further define body parts in their respective positions proximate to the user's waist. [0093] In another example, referring to FIG. 3F, the apparatus 102 may determine body positioning data related to the user by defining a reference plane and at least one additional node from a node map 354, which may include the distance between the apparatus 102 and reference nodes 106₁, 106₂ positioned proximate to the body of the user, and the distance of the apparatus 102 and each reference node 106₁, 106₂ to the additional node 106₃ positioned proximate to a body part of the user. The apparatus 102 may be configured to determine the reference plane based on a distance and angles between the apparatus 102 and the first and second reference nodes 106₁, 106₂ to thereby establish a geometric measurement (e.g., triangulation) therebetween. The apparatus 102 may be further configured to establish another geometric measurement (e.g., triangulation) between a third reference node 106₃ and the apparatus 102 and the first and second reference nodes 106₁, 106₂. The apparatus 102 and the first and second reference nodes 106₁, 106₂ may be worn on the body of the user, such as, for example, at the waist of the user to define the reference plane, as described in reference to FIG. 3E. In this arrangement, the apparatus 102 and reference nodes 106₁, 106₂ may be configured to define the reference plane proximate to the user's waist. The third reference node 106₃ may be worn proximate to a body part (e.g., head, hand, foot, knee, etc.) of the user, and the apparatus 102 may determine a 2-dimensional or 3-dimensional position of the third reference node 106₃ in relation to the reference plane.
Accordingly, the apparatus 102 is configured to obtain and/or determine body positioning data of at least one body part in relation to a reference plane defined on the body. [0094] It will be appreciated that one or more of the reference nodes 106₁, 106₂, 106₃, . . ., 106ₙ may be positioned anywhere proximate to the user's body or body parts (e.g., hands, feet, head, abdomen, waist, shoulders, etc.) to obtain and/or determine the position of at least one body part in relation to the body according to the user's physical dimensions. [0095] It will be appreciated that any information related to a user including user physical dimensions, user movements, user movement parameters, user scaling parameters, any and all relationships between user physical dimensions and user movements, history of user physical dimensions, and history of user movements may be stored as part of a computer readable medium. [0096] FIG. 3G is a conceptual diagram illustrating an example of the apparatus 102 and the remote system 104 being configured to determine at least one movement of the user by utilizing one or more reference nodes 106₁, 106₂, 106₃, . . ., 106ₙ. [0097] For example, referring to FIG. 3G, the apparatus 102 may determine at least one movement of the user by learning the at least one movement of the user from changes in node maps 362, 364, 366, 368, which identify movement of at least one reference node 106₃ in reference to the apparatus 102 and first and second nodes 106₁, 106₂. The apparatus 102 may be configured to define movement as a change in position of the third reference node 106₃ in reference to the position of the apparatus 102 and the position of the first and second reference nodes 106₁, 106₂. However, the apparatus 102 may be configured to define movement as a change in position of the third reference node 106₃ in reference to only the position of the apparatus 102. [0098] Referring to node map 362 of FIG. 3G, the apparatus 102 may be configured to calculate distance and angles between the apparatus 102 and each reference node 106₁, 106₂ to thereby establish a first geometric measurement (e.g., triangulation) therebetween for defining the reference plane. The apparatus 102 may be further configured to calculate another geometric measurement (e.g., triangulation) between the third reference node 106₃ and the apparatus 102 and the first and second reference nodes 106₁, 106₂. The node map 362 refers to a first node configuration of the apparatus 102 in relation to the reference nodes 106₁, 106₂, 106₃. [0099] Referring to node map 364, the user generates movement by moving the third reference node 106₃ to another position to establish a second node configuration as shown by node map 366. Referring to node map 366, the apparatus 102 may be further configured to calculate another geometric measurement (e.g., triangulation) between the third reference node 106₃ and the apparatus 102 and the first and second reference nodes 106₁, 106₂. The node map 366 refers to a second node configuration of the apparatus 102 in relation to the reference nodes 106₁, 106₂, 106₃. As such, referring to node map 368, the apparatus 102 is configured to calculate still another distance and angles between the apparatus 102 and the change in position of the third reference node 106₃ to thereby calculate another geometric measurement (e.g., triangulation) therebetween, which results in tracking the distance, angle and direction of movement of the third reference node 106₃.
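A compact way to express the FIG. 3E-3G arrangement is to span the reference plane with the apparatus and the two waist-worn nodes and then measure the third node against that plane. A Python/NumPy sketch, assuming the three plane points and the tracked node have already been resolved into a common coordinate frame (all values hypothetical):

import numpy as np

def distance_to_reference_plane(apparatus, node1, node2, tracked):
    # Plane spanned by the apparatus and the two waist-worn reference
    # nodes; signed distance of the tracked node along the unit normal.
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (apparatus, node1, node2))
    normal = np.cross(p1 - p0, p2 - p0)
    normal /= np.linalg.norm(normal)
    return float(np.dot(np.asarray(tracked, dtype=float) - p0, normal))

# Waist plane at a height of 1.0 m; a hand node at 1.4 m sits 0.4 m away.
d = distance_to_reference_plane((0, 0, 1.0), (0.2, 0.1, 1.0),
                                (-0.2, 0.1, 1.0), (0.1, 0.3, 1.4))
print(f"{abs(d):.2f} m from reference plane")  # 0.40 m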
[00100] In an aspect of the disclosure, the apparatus 102 is configured to communicate with at least one of the reference nodes 106₁, 106₂, 106₃ to obtain ranging measurements, including geometric measurements, relating to the relative position of the other reference nodes. In an example, the reference nodes 106₁, 106₂, 106₃ are configured to communicate with each other to transfer ranging measurements therebetween, and at least one of the reference nodes, such as reference node 106₁, is configured to communicate with the apparatus 102 so that the apparatus 102 obtains the ranging measurements relating to relative position between the reference nodes 106₁, 106₂, 106₃. [00101] In reference to FIGS. 1A-1C, the apparatus 102 may be configured to determine range and/or angular position between sensors (i.e., reference nodes 106₁, 106₂, . . ., 106ₙ) worn on a body with various RF techniques including monitoring signal strength, monitoring signal attenuation, time of flight of a single signal with timing synchronization, round-trip delay, magnetic field sensing, etc. In one example, the apparatus 102 may be configured to determine range and/or angular position between sensors (i.e., reference nodes 106₁, 106₂, . . ., 106ₙ) by a round-trip delay of multiple signals sent to each sensor (i.e., reference nodes 106₁, 106₂, . . ., 106ₙ) and/or round-trip delay of a single signal sent through multiple sensors (i.e., reference nodes 106₁, 106₂, . . ., 106ₙ). The body positioning data may include data and information related to ranging and/or angular position between the apparatus 102 and each of the sensors (i.e., reference nodes 106₁, 106₂, . . ., 106ₙ) to provide body tracking based on the body positioning data. The body positioning data may include data and information related to ranging and/or angular position between each of the sensors (i.e., reference nodes 106₁, 106₂, . . ., 106ₙ) and a reference plane defined by one or more of the sensors (i.e., reference nodes 106₁, 106₂, . . ., 106ₙ) to provide body tracking based on the body positioning data.
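Of the RF techniques listed in [00101], round-trip delay is the simplest to sketch: the range is the propagation speed times half the round-trip time, once the node's known turnaround delay has been removed. A hypothetical Python illustration (the timing values are invented):

SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_round_trip(round_trip_s: float, turnaround_s: float) -> float:
    # One-way range from a round-trip delay measurement to a reference
    # node; turnaround_s is the node's (assumed known) processing delay.
    return SPEED_OF_LIGHT_M_S * (round_trip_s - turnaround_s) / 2.0

# 20 ns measured round trip, 10 ns of node turnaround -> about 1.5 m.
print(f"{range_from_round_trip(20e-9, 10e-9):.2f} m")  # ~1.50 m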
[00102] It will be appreciated that any information related to node maps including node maps corresponding to user physical dimensions, user movements, user movement parameters, user scaling parameters, any and all relationships between user physical dimensions and user movements, history of user physical dimensions, and history of user movements may be stored as part of a computer readable medium. [00103] As described herein, user gestures may originate from any user body motion, movement, and/or pose, and user gestures include full body motion, movement, and/or pose and any body part motion, movement, and/or pose. For example, user gestures may include hand movements (e.g., punch, chop, lift, etc.), foot movements (e.g., kick, knee bend, etc.), head movements (e.g., head shake, nod, etc.), and/or body movements (e.g., jumping, kneeling, lying down, etc.). [00104] The apparatus 102 may be configured to determine user gestures as 2-dimensional and 3-dimensional spatial positioning of at least one body point (e.g., as defined by a node). The apparatus 102 may be configured to translate changes in 2-dimensional and 3-dimensional spatial positioning of a body point into a user gesture, which may be referred to as body motion, body movement, and/or changes between body poses. The apparatus 102 may be configured to determine 2-dimensional and 3-dimensional spatial positioning of a body point relative to a node on a user's body and/or a node on another user's body. The apparatus 102 may be configured to determine 2-dimensional and 3-dimensional spatial positioning of a body point relative to the apparatus 102. [00105] The apparatus 102 may be configured to track user orientation by defining a reference plane on the body (e.g., waist) of the user and tracking at least one other reference node on a body part (e.g., hand) of the user in relation to the reference plane to obtain body positioning data. The apparatus 102 may be configured to track user orientation by capturing signals from one or more remote ranging sensors that cover ranging (e.g., distance) between at least two body points on the user's body. The apparatus 102 may be configured to track user orientation by capturing signals from one or more remote ranging sensors from the apparatus 102 and one or more reference nodes 106₁, 106₂, 106₃ in a wearable device, belt, or harness that may be attached to the body, a body part, and/or as part of clothing. [00106] The apparatus 102 may be configured to track user orientation and/or user gestures (e.g., body motion, movement, and/or pose) by determining and/or obtaining one or more physical dimensions that are between at least two body parts of the user (e.g., waist and hand, waist and foot, wrist and foot, etc.). The apparatus 102 may be configured to track user orientation and/or determine user gestures (e.g., body motion, movement, and/or pose) by obtaining one or more physical dimensions that are between at least two body parts of separate users (e.g., a distance between reference planes defined on bodies of separate users, or a distance between waists, hands, or feet of different users). [00107] FIG. 4 is a block diagram of an apparatus 102 suitable for implementing various aspects of the disclosure. In one embodiment, the apparatus 102 of FIG. 1C may be implemented with the apparatus 102 of FIG. 4. [00108] In accordance with an aspect of the disclosure, the apparatus 102 provides a means for interacting with the user comprising, for example, a user interface 402. The user interface 402 may include the utilization of one or more of an input component (e.g., keyboard), a cursor control component (e.g., mouse or trackball), and an image capture component (e.g., analog or digital camera). The user interface 402 may include the utilization of a display component (e.g., CRT or LCD). [00109] In accordance with an aspect of the disclosure, the apparatus 102 comprises a processing system 404 that may be implemented with one or more processors. The one or more processors, or any of them, may be dedicated hardware or a hardware platform for executing software on a computer-readable medium. Software shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
[00110] In various implementations, the one or more processors may include, by way of example, any combination of microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable processors configured to perform the various functionalities described throughout this disclosure. [00111] In accordance with aspects of the disclosure, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. In various implementations, the computer-readable medium may include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer or processor. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer or processor. Also, any connection is properly termed a computer-readable medium. [00112] For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects, computer readable medium may comprise non-transitory computer readable medium (e.g., tangible media). In addition, in some aspects, computer readable medium may comprise transitory computer readable medium (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. The computer-readable medium may be embodied in a computer-program product. By way of example, a computer-program product may include a computer-readable medium in packaging materials. [00113] In an aspect of the disclosure, the processing system 404 provides a means for communicating with the reference nodes 106₁, 106₂, . . ., 106ₙ that may be worn on body parts of a user to obtain body positioning data relating to relative position between the body parts. The processing system 404 further provides a means for providing body tracking based on the body positioning data, which may relate to ranging and/or angular position between each of the reference nodes 106₁, 106₂, . . ., 106ₙ and a reference plane defined by one or more of the reference nodes (e.g., as described in FIGS. 3E-3F). The processing system 404 may provide a means for generating at least a portion of the body positioning data.
The processing system 404 may provide a means for providing body tracking that is configured to create a historical record of the one or more physical dimensions of the body and/or the one or more movements of the body from the body positioning data. The processing system 404 may provide a means for providing body tracking that is configured to create a historical record of a relationship between the one or more physical dimensions of the body and the one or more movements of the body from the body positioning data. The apparatus 102 may be configured to provide a means for generating reference data relating to relative position of at least one body part. The sensor 108 of FIG. 1C is an example of a sensing means. [00114] In accordance with an aspect of the disclosure, the apparatus 102 comprises a communication interface 406 having one or more communication components that may be implemented to receive and/or transmit signals via one or more communication links 408. For example, the communication interface 406 may comprise a short range communication component, such as a receiver, a transmitter, a receiver and a transmitter, or a transceiver. As such, the communication interface 406 may utilize a wireless communication component and an antenna, such as a mobile cellular device, a wireless broadband device, a wireless satellite device, or various other types of wireless communication devices including radio frequency (RF), microwave frequency (MWF), and/or infrared frequency (IRF) devices adapted for wireless communication. The communication interface 406 may be configured to receive information from a user, and/or the communication interface 406 may be configured to transmit information to a user. In another example, the communication interface 406 may comprise a network interface component (e.g., modem or Ethernet card) to receive and transmit wired and/or wireless signals. The communication interface 406 may be adapted to interface and communicate with various types of networks, such as local area networks (LAN), wide area networks (WAN) including the Internet, public switched telephone networks (PSTN), and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks. The communication interface 406 may be adapted to interface with a DSL (Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, and/or various other types of wired and/or wireless network communication devices adapted for wired and/or wireless communication. The communication interface 406 may be configured as a network gateway, such as an Internet gateway. [00115] In an aspect of the disclosure, the apparatus 102 provides a means for communicating comprising, for example, the communication interface 406 to communicate with the remote system 104 including one or more reference nodes 106₁, 106₂, . . ., 106ₙ that may be worn on body parts to obtain body positioning data relating to relative position between the body parts. The communication interface 406 may send the received body positioning data to the processing system 404. The communication interface 406 may include a means for receiving at least a portion of the body positioning data from one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ. The communication interface 406 may include a means for communicating with the reference nodes 106₁, 106₂, . . ., 106ₙ when worn on body parts of multiple users to obtain the body positioning data.
In various examples, the communication interface 406 comprises a means for communicating, which may comprise a receiver, a transmitter, a receiver and a transmitter, or a transceiver. [00116] The apparatus 102 may provide a means for generating reference data relating to relative position of at least one body part in relation to the apparatus 102, wherein the means for generating reference data comprises a sensor (e.g., the sensor 108 of FIG. 1C). [00117] FIG. 5 is a block diagram of an apparatus 102 suitable for implementing various aspects of the disclosure. The apparatus 102 may comprise a wired or wireless computing/processing/communication device (e.g., laptop, PC, PDA, mobile phone, game console, digital media player, television, etc.) capable of communicating with other wired or wireless devices (e.g., the remote system 104 and one or more of the reference nodes 106₁, 106₂, . . ., 106ₙ). [00118] In accordance with various aspects of the disclosure, the apparatus 102 includes a processing system having a processor 504 and a bus 502 or other communication mechanism for communicating information, which interconnects subsystems and components, such as the processor 504 (e.g., processor, micro-controller, digital signal processor (DSP), etc.) and one or more computer readable media 500. The computer readable media 500 may include one or more of system memory 506 (e.g., RAM), static storage 508 (e.g., ROM), and disk drive storage 510 (e.g., magnetic or optical). The apparatus 102 includes a communication interface 512 (e.g., one or more wired or wireless communication components for short range communication and/or network communication), display 514 (e.g., CRT or LCD), input component 516 (e.g., keyboard), cursor control 518 (e.g., mouse or trackball), and image capture component 520 (e.g., analog or digital camera). The disk drive 510 may comprise a database having one or more disk drives. It should be appreciated that any one of the memory components 506, 508, 510 may comprise a computer readable medium and be integrated as part of the processor 504 to store computer readable instructions or code related thereto for performing various aspects of the disclosure. The communication interface 512 may utilize a wireless communication component and an antenna to communicate over one or more communication links 530. [00119] In accordance with aspects of the disclosure, the apparatus 102 performs specific operations by the processor 504 executing one or more sequences of one or more instructions contained in the computer readable media 500, such as the system memory 506. Such instructions may be read into the system memory 506 from another computer readable medium, such as the static storage 508 and/or the disk drive 510. Hard-wired circuitry may be used in place of or in combination with software instructions to implement the disclosure. [00120] Logic may be encoded in the computer readable medium 500, which may refer to any medium that participates in providing instructions to the processor 504 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. In various implementations, non-volatile media includes optical or magnetic disks, such as the disk drive 510, and volatile media includes dynamic memory, such as the system memory 506.
In one aspect, data and information related to execution instructions may be transmitted to the apparatus 102 via transmission media, such as in the form of acoustic or light waves, including those generated during radio wave, microwave, and infrared data communications. In various implementations, transmission media may include coaxial cables, copper wire, and fiber optics, including wires that comprise the bus 502. [00121] Some common forms of computer readable media 500 include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, carrier wave, or any other medium from which a computer is adapted to read. [00122] In various aspects of the disclosure, execution of instruction sequences to practice the disclosure may be performed by the apparatus 102. In various other aspects of the disclosure, a plurality of apparatuses 102 coupled by the one or more communication links 530 (such as a short range wired or wireless communication medium, and/or network based communication including LAN, WLAN, PSTN, and/or various other wired or wireless communication networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the disclosure in coordination with one another. [00123] The apparatus 102 may transmit and receive messages, data, information and instructions, including one or more programs (i.e., application code) through the one or more communication links 530 and the communication interface 512. Received program code may be executed by the processor 504 as received and/or stored in the disk drive 510 or some other non-volatile memory or storage component for execution. [00124] Where applicable, various aspects provided by the disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from aspects of the disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa. [00125] Software, in accordance with the disclosure, such as program code and/or data, may be stored on one or more computer readable media. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein. [00126] It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating aspects of the disclosure and not for purposes of limiting the same. [0001] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein.
Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more. A phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a; b; c; a and b; a and c; b and c; and a, b and c. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for." |
Described herein are embodiments related to defect detection in memory components of memory systems with time-varying bit error rates. A processing device performs a read operation and, when one or more errors are detected, performs an error recovery flow (ERF) to recover a unit of data comprising data and a write timestamp indicating when the unit of data was written. The processing device determines whether to perform a defect detection operation to detect a defect in the memory component using a bit error rate (BER) corresponding to the read operation and the write timestamp in the unit of data. The processing device initiates the defect detection operation responsive to the BER condition not being expected for the calculated write-to-read (W2R) delay (based on the write timestamp). The processing device can also use an ERF condition and the write timestamp to determine whether to perform the defect detection operation, initiating the defect detection operation responsive to the ERF condition not being expected for the calculated W2R delay. |
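Read as a decision rule, the trigger described in this abstract (and in claim 1 below) can be sketched in Python as follows; the window table, names, and threshold handling are hypothetical illustrations, not taken from the specification:

import time

# Hypothetical per-read-level windows (seconds) of W2R delay in which an
# elevated BER or an ERF invocation is still EXPECTED, because the cell
# voltage distribution has drifted since the write.
EXPECTED_ERROR_W2R_WINDOW = {"default_level": (3600.0, float("inf"))}

def should_detect_defect(read_level: str, write_timestamp_s: float,
                         ber: float, ber_threshold: float,
                         erf_performed: bool) -> bool:
    # BER condition or ERF condition present, but NOT expected for the
    # write-to-read delay computed from the stored write timestamp.
    w2r_delay = time.time() - write_timestamp_s
    condition = ber >= ber_threshold or erf_performed
    lo, hi = EXPECTED_ERROR_W2R_WINDOW[read_level]
    return condition and not (lo <= w2r_delay <= hi)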
CLAIMSWhat is claimed is:1. A system comprising:a memory component; anda processing device, operatively coupled with the memory component, to:perform a read operation to read a unit of data comprising data and a write timestamp indicating when the unit of data was written to the memory component; detect an error recovery flow (ERF) condition, wherein the ERF condition is detected responsive to the ERF being performed to recover the unit of data responsive to one or more errors being detected in the read operation;detect a bit error rate (BER) condition, wherein the BER condition is detected responsive to a BER, corresponding to the read operation, satisfying a threshold criterion;determine a write-to-read (W2R) delay for the read operation using a current time of the read operation and the write timestamp;determine whether the BER condition or the ERF condition is expected for the W2R delay; andinitiate a defect detection operation responsive to the BER condition or the ERF condition not being expected for the W2R delay.2. The system of claim 1, wherein, after the unit of data is recovered by the ERF, the processing device is further to:determine whether the BER satisfies a threshold criterion when the unit of data is read by the read operation with an initial read voltage level; andresponsive to the BER satisfying the threshold criterion, initiate the defect detection operation to detect the defect in the memory component.3. The system of claim 1, wherein, after the unit of data is recovered by the ERF, the processing device is further to:determine whether a re-read operation is performed in the ERF, wherein the re-read operation is performed with a different read voltage level than an initial read voltage level used by the read operation before the ERF is performed; and responsive to the re-read operation being performed in the ERF, initiate the defect detection operation to detect the defect in the memory component.4. The system of claim 1, wherein the processing device is further to:determine whether the BER satisfies the threshold criterion when the unit of data is read by the read operation with an initial read voltage level;determine whether a re-read operation is performed in the ERF, wherein the re-read operation is performed with a different read voltage level than the initial read voltage level; andresponsive to the BER satisfying the threshold criterion and responsive to the re-read operation being performed in the ERF, initiate the defect detection operation to detect the defect in the memory component.5. The system of claim 1, wherein the processing device is further to:perform the read operation with an initial read voltage level on a plurality of memory cells to read the unit of data in the memory component before the ERF is performed;perform a re-read operation with a second read voltage level on the plurality of memory cells to recover the unit of data as part of the ERF, wherein the second read voltage level is different than the initial read voltage level; andinitiate the defect detection operation to detect the defect in the memory component after the unit of data is recovered.6. 
The system of claim 1, wherein the processing device is further to:detect the one or more errors in the unit of data read from a plurality of memory cells of the memory component using an initial read voltage level;in response to detection of one or more errors in the unit of data, perform a re-read operation with a second read voltage level on the plurality of memory cells to recover the unit of data as part of the ERF, wherein the second read voltage level is different than the initial read voltage level; andinitiate the defect detection operation to detect the defect in the memory component after the unit of data is recovered.7. The system of claim 1, wherein the processing device is further to:obtain the write timestamp; and issue a write operation to write the data and the write timestamp as the unit of data in the memory component.8. The system of claim 1, wherein the processing device is further to:obtain the write timestamp;obtain a write temperature value indicating a temperature when the unit of data was written; andissue a write operation to write the data, the write timestamp, and the temperature value as the unit of data in the memory component.9. A method comprising:issuing a read operation with a specified read voltage level to read a unit of data in a memory component;determining that the unit of data from the read operation is not successfully decoded because of an error;performing an error recovery flow (ERF) to recover the unit of data, wherein performing the ERF comprises issuing a re-read operation with a different read voltage level than the specified read voltage level;determining a write-to-read (W2R) delay for the read operation using a current time of the read operation and a write timestamp stored in connection with the unit of data;determining whether the W2R delay is within a range of W2R delays specified for the specified read voltage level; andinitiating a defect test routine responsive to the W2R delay for the read operation being within the range of W2R delays specified for the specified read voltage level.10. The method of claim 9, further comprising:obtaining the write timestamp; andissuing a write operation to write the data and the write timestamp as the unit of data in the memory component.11. The method of claim 9, further comprising:determining whether a bit error rate (BER), corresponding to the read operation, satisfies a threshold criterion when the unit of data is read by the read operation with the specified read voltage level, wherein initiating the defect test routine comprises initiating the defect test routine responsive to the BER satisfying the threshold criterion and the W2R delay being within the range of W2R delays specified for the specified read voltage level.12. The method of claim 9, further comprising:determining whether the re-read operation is performed in the ERF, and wherein initiating the defect test routine comprises initiating the defect test routine responsive to the re-read operation being performed in the ERF and the W2R delay being within the range of W2R delays specified for the specified read voltage level.13.
The method of claim 9, further comprising:determining whether a bit error rate (BER), corresponding to the read operation, satisfies a threshold criterion when the unit of data is read by the read operation with the specified read voltage level; anddetermining whether the re-read operation is performed in the ERF, and wherein initiating the defect test routine comprises initiating the defect test routine responsive to the BER satisfying the threshold criterion, the re-read operation being performed in the ERF, and the W2R delay being within the range of W2R delays specified for the specified read voltage level.14. The method of claim 9, further comprising:detecting one or more errors in the unit of data read from the memory component using the specified read voltage level; andin response to detection of one or more errors in the unit of data, performing the re-read operation with the different read voltage level to recover the unit of data as part of the ERF, wherein initiating the defect test routine comprises initiating the defect test routine after the unit of data is recovered.15. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to:issue a read operation with a specified read voltage level to read a unit of data in a memory component;determine that the unit of data from the read operation is not successfully decoded because of an error;perform an error recovery flow (ERF) to recover the unit of data, wherein performing the ERF comprises issuing a re-read operation with a different read voltage level than the specified read voltage level;determine a write-to-read (W2R) delay for the read operation using a current time of the read operation and a write timestamp stored in connection with the unit of data;determine whether the W2R delay is within a range of W2R delays specified for the specified read voltage level; andinitiate a defect test routine responsive to the W2R delay for the read operation being within the range of W2R delays specified for the specified read voltage level.16. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to:obtain the write timestamp; andissue a write operation to write the data and the write timestamp as the unit of data in the memory component.17. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to:determine whether a bit error rate (BER), corresponding to the read operation, satisfies a threshold criterion when the unit of data is read by the read operation with the specified read voltage level, wherein the defect test routine is initiated responsive to the BER satisfying the threshold criterion and the W2R delay being within the range of W2R delays specified for the specified read voltage level.18. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to:determine whether the re-read operation is performed in the ERF, and wherein the defect test routine is initiated responsive to the re-read operation being performed in the ERF and the W2R delay being within the range of W2R delays specified for the specified read voltage level.19.
The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to: determine whether a bit error rate (BER), corresponding to the read operation, satisfies a threshold criterion when the unit of data is read by the read operation with the specified read voltage level; and determine whether the re-read operation is performed in the ERF, wherein the defect test routine is initiated responsive to the BER satisfying the threshold criterion, the re-read operation being performed in the ERF, and the W2R delay being within the range of W2R delays specified for the specified read voltage level. 20. The non-transitory computer-readable storage medium of claim 15, wherein the processing device is further to: detect one or more errors in the unit of data read from the memory component using the specified read voltage level; and in response to detection of one or more errors in the unit of data, perform the re-read operation with the different read voltage level to recover the unit of data as part of the ERF, wherein the defect test routine is initiated after the unit of data is recovered. |
DEFECT DETECTION IN MEMORIES WITH TIME-VARYING BIT ERROR RATE TECHNICAL FIELD [001] Embodiments of the disclosure relate generally to memory sub-systems, and more specifically, relate to defect detection in memory components of a memory sub-system with time-varying bit error rates. BACKGROUND [002] A memory sub-system can be a storage system, such as a solid-state drive (SSD), or a hard disk drive (HDD). A memory sub-system can be a memory module, such as a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), or a non-volatile dual in-line memory module (NVDIMM). A memory sub-system can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize a memory sub-system to store data at the memory components and to retrieve data from the memory components. BRIEF DESCRIPTION OF THE DRAWINGS [003] The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. [004] FIG. 1 illustrates an example computing environment that includes a memory sub-system in accordance with some embodiments of the present disclosure. [005] FIG. 2 is a flow diagram of an example method to initiate a defect detection operation to detect a defect in a memory component using a bit error rate (BER), corresponding to the read operation, or an error recovery flow (ERF) indicator, and a write timestamp in accordance with some embodiments of the present disclosure. [006] FIG. 3 is a flow diagram of an example method to determine whether a W2R delay is within a range of W2R delays specified for an initial read voltage level in accordance with some embodiments of the present disclosure. [007] FIG. 4A is a graph that illustrates BER as a function of W2R delays for three read voltage levels in accordance with some embodiments of the present disclosure. [008] FIG. 4B is a graph that illustrates a W2R delay range, which is expected to achieve a good BER, for a default read level for one of the three read voltage levels of FIG. 4A in accordance with some embodiments of the present disclosure. [009] FIG. 5 is a block diagram of a hardware circuit that triggers a defect detection operation in a central processing unit (CPU) of a memory system in accordance with some embodiments of the present disclosure. [0010] FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure can operate. DETAILED DESCRIPTION [0024] Aspects of the present disclosure are directed to defect detection in memory sub-systems with time-varying bit error rates (BER). A memory sub-system is also hereinafter referred to as a “memory device.” An example of a memory sub-system is a storage device that is coupled to a central processing unit (CPU) via a peripheral interconnect (e.g., an input/output bus, a storage area network). Examples of storage devices include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, and a hard disk drive (HDD). Another example of a memory sub-system is a memory module that is coupled to the CPU via a memory bus. Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), a non-volatile dual in-line memory module (NVDIMM), etc. The memory sub-system can be, for instance, a hybrid memory/storage sub-system. 
In general, a host system can utilize a memory sub-system that includes one or more memory components. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system. [0025] The memory sub-system can include multiple memory components that can store data from the host system. Each memory component can include a different type of media. Examples of media include, but are not limited to, a cross-point array of non-volatile memory and flash-based memory such as single-level cell (SLC) memory, triple-level cell (TLC) memory, and quad-level cell (QLC) memory. The characteristics of different types of media can be different from one media type to another media type. One example of a characteristic associated with a memory component is data density. Data density corresponds to an amount of data (e.g., bits of data) that can be stored per memory cell of a memory component. Using the example of a flash-based memory, a quad-level cell (QLC) can store four bits of data while a single-level cell (SLC) can store one bit of data. Accordingly, a memory component including QLC memory cells will have a higher data density than a memory component including SLC memory cells. Another example of a characteristic of a memory component is access speed. The access speed corresponds to an amount of time for the memory component to access data stored at the memory component. [0026] Other characteristics of a memory component can be associated with the endurance of the memory component to store data. When data is written to and/or erased from a memory cell of a memory component, the memory cell can be damaged. As the number of write operations and/or erase operations performed on a memory cell increases, the probability that the data stored at the memory cell includes an error increases, and the memory cell is increasingly damaged. A characteristic associated with the endurance of the memory component is the number of write operations or a number of program/erase operations performed on a memory cell of the memory component. If a threshold number of write operations performed on the memory cell is exceeded, then data can no longer be reliably stored at the memory cell as the data can include a large number of errors that cannot be corrected. Different media types can also have different endurances for storing data. For example, a first media type can have a threshold of 1,000,000 write operations, while a second media type can have a threshold of 2,000,000 write operations. Accordingly, the endurance of the first media type to store data is less than the endurance of the second media type to store data. [0027] Another characteristic associated with the endurance of a memory component to store data is the total number of bytes written to a memory cell of the memory component. Similar to the number of write operations, as new data is written to the same memory cell of the memory component, the memory cell is damaged and the probability that data stored at the memory cell includes an error increases. If the number of total bytes written to the memory cell of the memory component exceeds a threshold number of total bytes, then the memory cell can no longer reliably store data. [0028] Another characteristic associated with a memory component is time-varying BER. In particular, some non-volatile memories (e.g., NAND, phase change, etc.) have threshold voltage (Vt) distributions that move as a function of time. 
With a same read level, if Vt distributions move, the BER changes. Given a Vt distribution at an instance in time, there is an optimal read level or optimal read level range that achieves a lowest bit error rate. In particular, the Vt distribution and BER can be a function of write-to-read (W2R) delay. Due to this time-varying nature of BER, as well as other noise mechanisms in memory, a single read level is not sufficient to achieve the best memory read BER to meet some system reliability targets. A single read level, such as one of the three read levels illustrated in FIG. 4A, achieves a low BER at short W2R delays but a high BER at longer delays. Multiple read levels, such as those illustrated in FIG. 4A, can be used in combination to achieve a low BER over the entire range of W2R delays. [0029] Non-volatile memory can have multiple noise mechanisms that increase BER, such as write wear, disturb, defects, or the like. However, during error recovery, read retry operations use different read levels to recover data. Read retry operations are used to achieve the lowest BER. For memories with W2R-delay-dependent BER, read retry operations are also used to handle a wide range of W2R delays. [0030] One particular problem in memory systems is how to detect grown defects. In particular, as the NVM-based system operates through its lifetime, defective pages, defective blocks, and defective dies may grow. In order to detect such grown defects, especially read-failure-related grown defects, typically a test routine is invoked to make sure the high BER or even uncorrectable error correction code (UECC) events are not induced by transient errors. Such test routines can be invoked periodically to detect defects in the system. However, defects can grow and show up at any time during host access. This is especially true in a very high performance system where many accesses to the memory can occur between periodic defect test routines. Also, for memories with W2R-delay-dependent BER, high BER or read retry events can largely be caused by the workload, meaning the conventional criteria for triggering defect test routines can generate a lot of false alarms, hurting system performance. Conventional memory sub-systems typically do not have on-demand trigger criteria for such defect test routines. [0031] Aspects of the present disclosure address the above and other deficiencies by providing on-demand trigger criteria for such defect test routines, based on metrics such as decoder statistics or read retry statistics, for memories with time-varying BER. In particular, the present disclosure includes an innovative approach for defect detection in memories with time-varying BER, in particular, with BER dependent on W2R delay. A write timestamp is written to the memory together with the data for each write operation. After each read (possibly with an error recovery flow), the system determines whether to trigger defect test routines based on the combination of its W2R delay and other statistics, including decoder statistics and error recovery flow statistics. The present disclosure defines when a test routine can be invoked to make sure the high BER or even UECC events are not induced by transient errors. These test routines can be invoked on-demand, as opposed to periodically as done conventionally. Also, the present disclosure addresses how to detect defects that grow and show up at any time during host access. 
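As a concrete illustration of the point above that no single read level covers the whole W2R range, the following C sketch computes a W2R delay from a stored write timestamp and selects whichever read level's delay window covers it. The level count, window bounds, and microsecond units are illustrative assumptions rather than values from this disclosure; the flows described later instead start every read at a default level and fall back to re-reads during the ERF.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* One candidate read voltage level and the W2R delay window (in
 * microseconds) over which it is assumed to achieve a low BER. */
typedef struct {
    int      level_id;
    uint64_t w2r_min_us;
    uint64_t w2r_max_us;
} read_level_range;

/* Hypothetical windows for three read levels, mirroring the idea of
 * FIG. 4A: each level is good only over part of the W2R delay range. */
static const read_level_range k_levels[] = {
    { 1,        0,      10000 },     /* short delays            */
    { 2,    10000,   10000000 },     /* medium delays (default) */
    { 3, 10000000, UINT64_MAX },     /* long delays             */
};

static int pick_read_level(uint64_t now_us, uint64_t write_ts_us)
{
    uint64_t w2r = now_us - write_ts_us;   /* write-to-read delay */
    for (size_t i = 0; i < sizeof k_levels / sizeof k_levels[0]; i++)
        if (w2r >= k_levels[i].w2r_min_us && w2r <= k_levels[i].w2r_max_us)
            return k_levels[i].level_id;
    return 2;                              /* fall back to the default */
}

int main(void)
{
    /* A unit written at t=1000 us and read at t=6000 us: 5 ms delay. */
    printf("read level: %d\n", pick_read_level(6000, 1000));
    return 0;
}
```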
Also, the present disclosure addresses how to reduce false alarms that hurt system performance, since defects must be distinguished from other events that are largely caused by the workload. That is, the present disclosure minimizes false alarms and reduces the performance penalty caused by defect management algorithms that are run periodically and triggered by events that are not associated with defects. As described herein, the on-demand criterion can apply to every read operation and can effectively detect abnormally high RBER events to trigger defect detection algorithms. The trigger criterion can be implemented in hardware, software, or any combination thereof without impacting system performance. [0032] In one implementation, a processing device performs a read operation to read a unit of data comprising data and a write timestamp indicating when the unit of data was written to the memory component. The processing device can perform an error recovery flow (ERF) to recover the unit of data responsive to one or more errors being detected in the read operation. The processing device determines whether to perform a defect detection operation to detect a defect in the memory component using a BER, corresponding to the read operation, and the write timestamp. In another embodiment, the processing device determines whether to perform a defect detection operation to detect a defect in the memory component using an indication of an ERF being performed (also referred to as an ERF indicator) and the write timestamp. The ERF being performed can be an indication of a defect in the memory component as well. The processing device initiates the defect detection operation responsive to the write timestamp being within a specified range corresponding to an initial read voltage level of the read operation. Additional details of defect detection in memory components with time-varying BER are described in more detail below. [0011] FIG. 1 illustrates an example computing environment 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure. The memory sub-system 110 can include media, such as memory components 112A to 112N. The memory components 112A to 112N can be volatile memory components, non-volatile memory components, or a combination of such. In some embodiments, the memory sub-system is a storage system. An example of a storage system is an SSD. In some embodiments, the memory sub-system 110 is a hybrid memory/storage sub-system. In general, the computing environment 100 can include a host system 120 that uses the memory sub-system 110. For example, the host system 120 can write data to the memory sub-system 110 and read data from the memory sub-system 110. [0012] The host system 120 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, or other such computing device that includes a memory and a processing device. The host system 120 can include or be coupled to the memory sub-system 110 so that the host system 120 can read data from or write data to the memory sub-system 110. The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. As used herein, “coupled to” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, etc. 
Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components 112A to 112N when the memory sub-system 110 is coupled with the host system 120 by the PCIe interface. The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120. [0013] The memory components 112A to 112N can include any combination of the different types of non-volatile memory components and/or volatile memory components. An example of a non-volatile memory component is a negative-and (NAND) type flash memory. Each of the memory components 112A to 112N can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)). In some embodiments, a particular memory component can include both an SLC portion and an MLC portion of memory cells. Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system 120. Although non-volatile memory components such as NAND type flash memory are described, the memory components 112A to 112N can be based on any other type of memory such as a volatile memory. In some embodiments, the memory components 112A to 112N can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components 112A to 112N can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data. [0014] The memory system controller 115 (hereinafter referred to as “controller”) can communicate with the memory components 112A to 112N to perform operations such as reading data, writing data, or erasing data at the memory components 112A to 112N and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor. The controller 115 can include a processor (processing device) 117 configured to execute instructions stored in local memory 119. 
In the illustrated example, the local memory 119 of the controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120. In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory sub-system 110 does not include a controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system). [0015] In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components 112A to 112N. The controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components 112A to 112N. The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory components 112A to 112N as well as convert responses associated with the memory components 112A to 112N into information for the host system 120. [0033] The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory components 112A to 112N. [0034] The memory sub-system 110 includes a defect detection component 113 that can be used to determine whether to perform a defect detection operation to detect a defect in a memory component using a BER or ERF indicator and a write timestamp in the unit of data, the write timestamp indicating when the unit of data was written to the memory component. The defect detection component 113 can trigger a defect detection operation responsive to the BER satisfying the BER threshold and the calculated W2R delay (based on the write timestamp) being within the range of W2R delays specified for the initial read voltage level. In some embodiments, the controller 115 includes at least a portion of the defect detection component 113. For example, the controller 115 can include a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations described herein. 
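The trigger condition just described for the defect detection component 113 (a high BER or an ERF indicator, combined with a W2R delay inside the range expected to give a good BER) can be summarized in a few lines. This is a minimal sketch with assumed field names and units, not the component's actual implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Statistics gathered for one read; names are illustrative assumptions. */
typedef struct {
    double   ber;             /* BER observed on the initial read     */
    bool     erf_performed;   /* an error recovery flow was needed    */
    uint64_t read_time_us;    /* current time of the initial read     */
    uint64_t write_ts_us;     /* write timestamp stored with the unit */
} read_stats;

static bool should_trigger_defect_detection(const read_stats *s,
                                            double ber_threshold,
                                            uint64_t w2r_min_us,
                                            uint64_t w2r_max_us)
{
    uint64_t w2r = s->read_time_us - s->write_ts_us;
    bool abnormal = (s->ber > ber_threshold) || s->erf_performed;
    bool in_range = (w2r >= w2r_min_us) && (w2r <= w2r_max_us);
    /* Inside the range a good BER is expected, so an abnormal read there
     * suggests a grown defect rather than a workload-induced W2R delay. */
    return abnormal && in_range;
}
```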
In some embodiments, the defect detection component 113 is part of the host system 120, an application, or an operating system. [0035] The defect detection component 113 can determine whether the BER, corresponding to the read operation, satisfies a threshold criterion when the unit of data is read from any one of the memory components 112A to 112N by the read operation with the initial read voltage level. Responsive to the BER satisfying the threshold criterion, the defect detection component 113 can initiate or otherwise perform the defect detection operation to detect the defect in the respective memory component using the BER, corresponding to the read operation, and the write timestamp. For example, after the unit of data is recovered, the defect detection component 113 can determine whether a re-read operation is performed in the ERF. The re-read operation is performed with a different read voltage level than an initial read voltage level used with an initial read operation before the ERF is performed. Responsive to the re-read operation being performed in the ERF, the defect detection component 113 can initiate or otherwise perform the defect detection operation to detect the defect in the memory component using the BER, corresponding to the read operation, and the write timestamp. In another embodiment, the defect detection component 113 can determine whether an ERF has been performed to satisfy a threshold criterion when the unit of data is read from any one of the memory components 112A to 112N by the read operation with the initial read voltage level. Responsive to the ERF satisfying the threshold criterion, the defect detection component 113 can initiate or otherwise perform the defect detection operation to detect the defect in the respective memory component using the indication of the ERF and the write timestamp. For example, after the unit of data is recovered, the defect detection component 113 can determine whether a re-read operation is performed in the ERF. The re-read operation is performed with a different read voltage level than an initial read voltage level used with an initial read operation before the ERF is performed. Responsive to the re-read operation being performed in the ERF, the defect detection component 113 can initiate or otherwise perform the defect detection operation to detect the defect in the memory component using the indication of the ERF and the write timestamp. [0036] FIG. 2 is a flow diagram of an example method 200 to initiate a defect detection operation to detect a defect in a memory component using a bit error rate (BER), corresponding to a read operation, or an ERF indicator, and a write timestamp in accordance with some embodiments of the present disclosure. The method 200 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 200 is performed by the memory defect detection component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. 
Thus, not all processes are required in every embodiment. Other process flows are possible. [0037] At operation 210, the processing device performs a read operation to read a unit of data comprising data and a write timestamp indicating when the unit of data was written to the memory component. At operation 220, the processing device detects a high BER condition or an error recovery flow (ERF) condition. The high BER condition can be detected responsive to a BER, corresponding to the read operation, satisfying a BER threshold criterion. The ERF condition can be detected when an ERF is performed to recover the unit of data responsive to one or more errors being detected in the read operation. When the ERF is performed, there can be an indication that the ERF has been performed, such as an ERF indicator. The ERF indicator, representing the ERF being performed for the read operation, can serve as an indicator of a defect in the memory component. At operation 230, the processing device determines a write-to-read (W2R) delay for the read operation using a current time of the read operation and the write timestamp. At operation 240, the processing device determines whether the BER condition or the ERF condition is expected for the W2R delay. At operation 250, the processing device initiates the defect detection operation responsive to the BER condition, corresponding to an initial read voltage level of the read operation, or the ERF condition not being expected for the W2R delay. For example, as illustrated in FIG. 4A, the processing device can store an expected range of BER within a specified range of W2R delays, and the BER and the W2R delay corresponding to the initial read operation can be compared against the expected range of BER for the specified range of W2R delays to determine whether to initiate the defect detection operation. Responsive to the BER being higher than the expected range of BER and the W2R delay being within the range of W2R delays specified for the initial read voltage level, the defect detection operation is initiated. Responsive to the BER being within the expected range of BER or the W2R delay being outside the range of W2R delays specified for the initial read voltage level, the defect detection operation is not initiated. [0038] In a further embodiment, after the unit of data is recovered by the ERF, the processing device determines whether the BER satisfies a threshold criterion when the unit of data is read by the read operation with the initial read voltage level. The processing device initiates the defect detection operation to detect the defect in the memory component using the BER and the write timestamp responsive to the BER satisfying the threshold criterion. When the BER does not satisfy the threshold criterion, the processing device does not initiate the defect detection operation and the read operation is completed. [0039] In another embodiment, after the unit of data is recovered, the processing device determines whether a re-read operation is performed in the ERF. The re-read operation is performed with a different read voltage level than an initial read voltage level used with the read operation before the ERF is performed. Responsive to the re-read operation being performed in the ERF, the processing device initiates the defect detection operation to detect the defect in the memory component using the ERF indicator and the write timestamp. 
If there is no re-read operation performed in the ERF, the processing device does not initiate the defect detection operation and the read operation is completed. [0040] In another embodiment, the processing device determines whether the BER satisfies a threshold criterion when the unit of data is read by the read operation with an initial read voltage level. The processing device determines whether a re-read operation is performed in the ERF. As noted above, the re-read operation is performed with a different read voltage level than the initial read voltage level. Responsive to the BER satisfying the threshold criterion and responsive to the re-read operation being performed in the ERF, the processing device initiates the defect detection operation to detect the defect in the memory component using the BER and the write timestamp. Responsive to the BER not satisfying the threshold criterion or no re-read operation being performed in the ERF, the processing device does not initiate the defect detection operation and the read operation is completed. [0041] In another embodiment, the processing device performs a read operation with a first read voltage level on a set of memory cells to read the unit of data in the memory component before the ERF is performed. The processing device performs a re-read operation with a second read voltage level on the set of memory cells to recover the unit of data as part of the ERF. The second read voltage level is different than the first read voltage level. The processing device initiates the defect detection operation to detect the defect in the memory component after the unit of data is recovered. [0042] In another embodiment, the processing device detects one or more errors in the unit of data read from a set of memory cells of the memory component using a default read voltage level. In response to detection of one or more errors in the unit of data, the processing device performs a re-read operation with a second read voltage level on the set of memory cells to recover the unit of data as part of the ERF. As noted above, the second read voltage level is different than the default read voltage level. The processing device initiates the defect detection operation to detect the defect in the memory component after the unit of data is recovered. [0043] In another embodiment, the processing device receives a request to write data to a memory component. The processing device obtains a write timestamp and issues the write operation to write the data and the write timestamp as the unit of data in the memory component. [0044] In another embodiment, the processing device obtains the write timestamp and obtains a write temperature value indicating a temperature when the unit of data was written. The processing device issues a write operation to write the data, the write timestamp, and the temperature value as the unit of data in the memory component. In other embodiments, additional metadata can be stored in connection with the write timestamp in the unit of data. The metadata can be used in connection with the defect detection operation. [0045] In another embodiment, the processing device can determine to perform a defect detection operation even when an ERF is not performed. For example, the original read operation succeeds, but the processing device determines that the BER is higher than expected and the W2R delay is within the range of W2R delays specified for the initial read. 
In this case, the processing logic can perform the defect detection operation to detect a defect in the memory component. [0046] In another embodiment, at operation 230, instead of using the BER and the write timestamp, the processing device can determine whether to perform a defect detection operation to detect a defect in the memory component using an indication of an ERF being performed as a result of an unsuccessful initial read operation and the W2R delay (based on the write timestamp in the unit of data) being within the range of W2R delays specified for the initial read. [0047] FIG. 3 is a flow diagram of an example method 300 to determine whether a W2R delay is within a range of W2R delays specified for an initial read voltage level in accordance with some embodiments of the present disclosure. The method 300 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 300 is performed by the memory defect detection component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible. [0048] At operation 310, the processing device issues a read operation with a specified read voltage level to read a unit of data in a memory component. At operation 320, the processing device determines whether the unit of data from the read operation is successfully decoded. When the processing device determines that the unit of data from the read operation is successfully decoded at operation 320, the processing device determines whether a bit error rate (BER) satisfies a BER threshold criterion when the unit of data is read by the read operation with the specified read voltage level at operation 325. Responsive to the BER satisfying the BER threshold criterion at operation 325, the processing device determines a write-to-read (W2R) delay between the write of the unit and the original read operation of operation 310, using a current time of the initial read operation and the write timestamp stored in connection with the unit of data, at operation 350. At operation 360, the processing device determines whether such a BER condition or ERF condition is expected for this W2R delay. Responsive to the BER condition or the ERF condition being expected at operation 360, the processing device completes the read operation at operation 330. Responsive to the BER condition or the ERF condition not being expected at operation 360, the processing device initiates a defect test routine at operation 370. [0049] For example, the processing device at operation 360 determines whether the BER, corresponding to the read operation, is expected at this given W2R delay. Responsive to the given W2R delay not being within the range of W2R delays for the initial read voltage level at operation 360, the read operation is completed at operation 330. 
Responsive to the given W2R delay being within the range of W2R delays for the initial read voltage level at operation 360, the processing device initiates the defect test routine at operation 370. In particular, when the BER for the read operation is higher than a range of BER corresponding to a range of W2R delays specified for the specified read voltage level (i.e., a range of acceptable BER for a range of W2R delays as the BER threshold criterion) and the given W2R delay is within the range of W2R delays for the initial read voltage level, the defect test routine is initiated at operation 370. [0050] For another example, the processing device at operation 360 determines whether a re-read operation is performed in the ERF at operation 340. The re-read operation is performed with a different read voltage level than the initial read voltage level used by the read operation at operation 310 before the ERF is performed. Responsive to the re-read operation being performed for the read operation and the given W2R delay being within the range of W2R delays for the initial read voltage level, the processing device initiates the defect test routine at operation 370. [0051] Responsive to the BER not satisfying the BER threshold criterion at operation 325, the processing device completes the read operation at operation 330. [0052] When the processing device determines that the unit of data from the read operation is not successfully decoded because of an error at operation 320, the processing device performs an error recovery flow (ERF) to recover the unit of data at operation 340. In some embodiments, during the ERF the processing device issues one or more re-read operations with one or more read voltage levels that are different from the specified read voltage level. After the ERF is performed at operation 340, at operation 350, the processing device determines the W2R delay for the read operation at operation 310 using a current time of the initial read operation and a write timestamp stored in connection with the unit of data when the unit of data was written. It should be noted that the W2R delay is between the write timestamp and the initial read at operation 310 and not any re-reads performed during the ERF at operation 340. As described above, at operation 360, the processing device determines whether the W2R delay is within a range of W2R delays specified for the specified read voltage level. Responsive to the W2R delay not being within the range of W2R delays at operation 360, the read operation is completed at operation 330. Responsive to the W2R delay being within the range of W2R delays at operation 360, the processing device initiates a defect test routine at operation 370. In particular, when the W2R delay for the read operation is within the range of W2R delays specified for the specified read voltage level and an ERF is performed at operation 340, the defect test routine is initiated. [0053] In another embodiment, the processing device obtains a write timestamp and issues a write operation to write the data and the write timestamp as the unit of data in the memory component. The processing device can obtain and write timestamps for each unit of data being written to the memory component. 
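The write path just described, storing the write timestamp (and, as discussed below, optionally a temperature value) as metadata in the same unit as the data, might look roughly like the following. The unit layout, sizes, and platform hooks are assumptions for illustration only:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define UNIT_DATA_BYTES 4096

/* A unit of data as written to the memory component: payload plus the
 * metadata consulted later when a read of this unit looks abnormal. */
typedef struct {
    uint8_t  data[UNIT_DATA_BYTES];
    uint64_t write_ts_us;   /* when the unit was written               */
    int16_t  write_temp_c;  /* die temperature at write time, optional */
} data_unit;

/* Stubbed platform hooks so the sketch is self-contained. */
static uint64_t platform_time_us(void)    { return 123456789u; }
static int16_t  platform_die_temp_c(void) { return 45; }
static int nand_program(uint32_t page, const data_unit *u)
{
    (void)page; (void)u;
    return 0;   /* a real controller would issue the program here */
}

static int write_unit(uint32_t page, const uint8_t *payload, size_t len)
{
    data_unit u;
    memset(&u, 0, sizeof u);
    memcpy(u.data, payload, len < UNIT_DATA_BYTES ? len : UNIT_DATA_BYTES);
    u.write_ts_us  = platform_time_us();    /* obtained at write time */
    u.write_temp_c = platform_die_temp_c();
    return nand_program(page, &u);          /* data + metadata together */
}
```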
The write timestamp can be used to calculate W2R delays, and the calculated W2R delays can be checked against a corresponding range for the default read voltage level. [0054] In another embodiment, the processing device also obtains temperature or other measurements at the time of the write operation and stores the temperature or other measurements as metadata in connection with the data. For example, a unit of data stores the data, the write timestamp, and the temperature at the time the data is written to the memory component. [0055] In one embodiment, the processing device determines whether a bit error rate (BER) satisfies a threshold criterion when the unit of data is read by the read operation with the specified read voltage level. The processing device initiates the defect test routine responsive to the BER satisfying the threshold criterion and the W2R delay being within the range of W2R delays specified for the specified read voltage level. [0056] In another embodiment, the processing device determines whether the re-read operation is performed in the ERF. The processing device initiates the defect test routine responsive to the re-read operation being performed in the ERF and the W2R delay being within the range of W2R delays specified for the specified read voltage level. [0057] In another embodiment, the processing device determines both whether the BER of the initial read operation satisfies the threshold criterion and whether the re-read operation is performed in the ERF. The processing device initiates the defect test routine responsive to both conditions being met. In particular, the processing device determines whether a BER of the initial read operation satisfies a threshold criterion when the unit of data is read by the read operation with the specified read voltage level. The processing device determines whether the re-read operation is performed in the ERF. The processing device initiates the defect test routine responsive to the BER satisfying the threshold criterion, the re-read operation being performed in the ERF, and the W2R delay being within the range of W2R delays specified for the specified read voltage level. In other embodiments, additional checks can be made against other metadata values stored in connection with the unit of data. For example, when a write temperature value is written in connection with the unit of data, the processing device can determine whether the defect test routine should be performed or not based on considering both the W2R delay and the current/write temperature information. [0058] In another embodiment, the processing device detects one or more errors in the unit of data read from the memory component using an initial read voltage level. In response to detection of one or more errors in the unit of data, the processing device performs the re-read operation with the different read voltage level to recover the unit of data as part of the ERF. The processing device initiates the defect test routine after the unit of data is recovered. [0059] FIG. 4A is a graph 400 that illustrates BER as a function of W2R delays for three read voltage levels in accordance with some embodiments of the present disclosure. As described herein, Vt distributions can move as a function of time. 
For example, with a same read level, such as a second read level (labeled Read level 2) corresponding to an initial read voltage level (also referred to as a default read level), if the Vt distributions move, the bit error rate for this read voltage level changes as a function of time. Similarly, if Vt distributions move for a first read level, the bit error rate for this read voltage level changes as a function of time. Similarly, if Vt distributions move for a third read level, the bit error rate for this read voltage level changes as a function of time. The Vt distribution and bit error rate can be a function of W2R delay. Graph 400 shows a bit error rate curve 402 as a function of W2R delay corresponding to the second read level, a bit error rate curve 404 as a function of W2R delay corresponding to the first read level, and a bit error rate curve 406 as a function of W2R delay corresponding to the third read level. Due to the time-varying nature of BER, the single read level (default read level) is not sufficient to achieve the best memory read BER for system reliability targets. For example, a single read level, e.g., read level 1, achieves a low BER at short W2R delays but a high BER at long delays. As such, multiple read levels, such as the three read levels shown in FIG. 4A, are used to achieve a low BER over a larger range of W2R delay. Using the embodiments described herein, the W2R delay can be measured using the write timestamp and a current time of the initial read operation to determine whether the measured W2R delay is within a range specified for a particular read level as shown and described with respect to FIG. 4B. [0060] FIG. 4B is a graph 420 that illustrates a W2R delay range 408 for a default read level for one of the three read voltage levels of FIG. 4A in accordance with some embodiments of the present disclosure. If a read is performed at a certain W2R delay within range 408, it is expected that a good BER should be achieved for this read. As described herein, every write unit of data is written to memory with a write timestamp when the write unit is written. Each read operation starts with a default read level. When there are uncorrectable errors, an error recovery flow is performed. During the error recovery flow, one or more re-read operations are performed with read levels that are different than the default read level. For example, as illustrated in FIG. 4B, the default read level is the second read level. The second read level has a bit error rate curve 402 as a function of W2R delay. If it is determined that the decoder statistics, such as BER, are high at the default read level, or if a re-read operation is triggered with a different read level, the processing device performs a check on the following criterion after the data and the corresponding write timestamp are recovered (i.e., successfully decoded with the initial read or using the different read level in the ERF). The check includes measuring a W2R delay for the initial read operation by taking a difference between a current time of the initial read operation and the write timestamp, and comparing the W2R delay against the W2R delay range 408 specified for the default read level. If the W2R delay for the initial read operation falls in the W2R delay range 408, a defect test routine is triggered; otherwise, the defect test routine is not triggered. 
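Expressed as code, the range-408 check above reduces to a window comparison keyed by the default read level that was used for the initial read. The window values below are made-up placeholders, not numbers from the disclosure:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { uint64_t lo_us, hi_us; } w2r_window;

/* Index 0 unused; hypothetical windows for read levels 1..3, where a
 * good BER is expected when the W2R delay falls inside the window. */
static const w2r_window k_window[4] = {
    { 0, 0 },
    { 0,        10000      },   /* level 1: short delays        */
    { 10000,    10000000   },   /* level 2: default, mid range  */
    { 10000000, UINT64_MAX },   /* level 3: long delays         */
};

/* True when the initial read (not any ERF re-read) happened at a delay
 * where its default level should have decoded cleanly; a high BER or a
 * triggered re-read under that condition starts the defect test routine. */
static bool w2r_expected_good(int default_level,
                              uint64_t read_time_us, uint64_t write_ts_us)
{
    uint64_t w2r = read_time_us - write_ts_us;
    return w2r >= k_window[default_level].lo_us &&
           w2r <= k_window[default_level].hi_us;
}
```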
It should be noted that the W2R delay is measured for the initial read operation, not any re-read operations as part of the ERF. [0061] In one embodiment, the processing device implements this check in a hardware circuit, including logic circuitry with at least one input being whether the W2R delay is within the W2R delay range 408. The logic circuitry can output an interrupt signal that causes the processing device to perform the defect test routine. In another embodiment, the processing device implements this check in firmware. The firmware calculates the W2R delay and determines if the W2R delay is within the W2R delay range 408. The firmware can initiate the defect test routine accordingly. In another embodiment, the processing device implements this check as a software routine that is executed in connection with read operations. [0062] In another embodiment, the processing device can specify a range for each of the multiple read thresholds. In that manner, if the first read level is considered the default read level for the initial read operation, there can be a corresponding W2R delay range for the first read level. Similarly, if the third read level is considered the default read level for the initial read operation, there can be a corresponding W2R delay range for the third read level. It should also be noted that the processing device can include more or fewer read levels than three and there can be W2R delay ranges for one or more of these multiple read levels. [0063] In another embodiment, the write timestamp can be embedded with the data during memory write operations, and after each read operation with an ERF, the processing device can determine whether to trigger the defect detection operation based on the combination of its W2R delay and other statistics, such as decoding history statistics (BER) of this data unit. In other embodiments, additional metadata can be stored along with the write timestamp. The additional metadata can impact BER, for example, and the additional metadata can be used in the check to determine whether to check for defects based on the different combinations of statistics, the additional metadata, and the write timestamp. [0064] The embodiments described herein provide an on-demand criterion that applies to every read operation. The embodiments effectively detect abnormal characteristics, such as high read bit error rate (RBER) events, and trigger defect detection responsive to the write timestamp falling within a specified range for a read voltage level used for the initial read operation. The embodiments can minimize false alarms and can reduce performance penalties caused by conventional defect management algorithms. The embodiments of the trigger criterion described herein can be simple and can be implemented in hardware without impacting system performance. [0065] FIG. 5 is a block diagram of a hardware circuit 500 that triggers a defect detection operation in a central processing unit (CPU) 510 of a memory system in accordance with some embodiments of the present disclosure. The hardware circuit 500 includes first comparison circuitry 502, second comparison circuitry 504, and logic circuitry 506. The first comparison circuitry 502 can receive as inputs a first signal 512, indicative of a first statistic, such as BER or RBER, and a second signal 514, indicative of a first threshold, such as a BER or RBER threshold. The first comparison circuitry 502 can include one or more comparators to compare the inputs. 
The first comparison circuitry 502 compares the inputs to generate a first output signal 522, indicative of an abnormal condition, such as high BER. The second comparison circuitry 504 can receive as inputs a third signal 516, indicative of a first timing statistic, such as W2R delay, a fourth signal 518, indicative of a lower threshold of a range, such as a W2R delay lower threshold, and a fifth signal 520, indicative of an upper threshold of the range, such as a W2R delay upper threshold. The second comparison circuitry 504 can include one or more comparators to compare the inputs. The second comparison circuitry 504 compares the inputs to generate a second output signal 524, indicative of the third signal 516 being within the range, such as within the W2R delay range. Logic circuitry 506 can receive the first output signal 522 and the second output signal 524, and based on the particular function of the logic circuitry, such as an AND function, outputs an interrupt 526 to the CPU 510. The interrupt 526 can indicate that a defect detection operation should be performed. [0066] In one embodiment, the interrupt 526 is the result of a BER for a read operation satisfying a BER threshold criterion and the W2R delay (based on the write timestamp) being within a W2R delay range. The hardware circuit 500 can include different logic and circuit components to determine the conditions for triggering the defect detection operation. For example, the inputs can include the write timestamp and a current time of the initial read operation to calculate the W2R delay before being compared against the W2R delay range. In other embodiments, the inputs can include other metadata such as the temperature at the time the write unit is written to the memory component. Although the logic circuitry 506 is illustrated as a single AND gate in FIG. 5, in other embodiments, the logic circuitry 506 can include one or more logic gates that define a function to determine whether the defect detection operation is triggered or not. Also, as described herein, the functionality of the hardware circuit 500 can be implemented in firmware or software. [0067] In another embodiment, similar comparison and logic circuitry could be used to detect the ERF condition and to generate an interrupt to the CPU 510 when the ERF condition is detected. Similarly, other comparison and logic circuitry could be used to detect other conditions as a function of the W2R delay and generate an interrupt to the CPU 510 when the other condition is detected. [0068] FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the defect detection component 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. 
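The trigger logic of hardware circuit 500 described above can also be modeled behaviorally as two comparisons feeding an AND gate. This sketch assumes the AND-gate variant of FIG. 5 and integer-scaled BER signals, both of which are illustrative choices rather than the circuit's actual implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Inputs loosely mirroring the signals of FIG. 5 (names assumed). */
typedef struct {
    uint32_t ber_x1e9;       /* signal 512: BER scaled by 1e9         */
    uint32_t ber_thr_x1e9;   /* signal 514: BER threshold, same scale */
    uint64_t w2r_us;         /* signal 516: measured W2R delay        */
    uint64_t w2r_lo_us;      /* signal 518: lower bound of range 408  */
    uint64_t w2r_hi_us;      /* signal 520: upper bound of range 408  */
} circuit_inputs;

static bool defect_interrupt(const circuit_inputs *in)
{
    bool high_ber  = in->ber_x1e9 > in->ber_thr_x1e9;           /* 522 */
    bool in_window = in->w2r_us >= in->w2r_lo_us &&
                     in->w2r_us <= in->w2r_hi_us;                /* 524 */
    return high_ber && in_window;   /* interrupt 526 to the CPU 510 */
}
```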
The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. [0069] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. [0070] The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630. [0071] Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 can further include a network interface device 608 to communicate over the network 620. 
The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. [0074] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. [0075] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems. [0076] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. [0077] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. 
It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.[0078] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.[0079] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
PROBLEM TO BE SOLVED: To provide an integrated circuit, an integrated circuit module, and a method that prevent localized power supply voltage drops and have improved power grid architectures. SOLUTION: An improved power-grid-tier design method is provided in which a hard macro (circuit module) receives multiple power grid tier allocations. In the method, the placement and routing of the hard macro, which contains multiple circuit tiles, are arranged so that some of the circuit tiles are allocated a more robust power grid tier while the remaining circuit tiles are allocated a less robust power grid tier. SELECTED DRAWING: Figure 1 |
CLAIMS:1. An integrated circuit comprising:an integrated circuit module comprising a first plurality of tiles and a second plurality of tiles, wherein the integrated circuit module occupies a footprint on a semiconductor die and the footprint is subdivided among the first plurality of tiles and the second plurality of tiles;a first power grid tier for each tile in the first plurality of tiles; anda second power grid tier for each tile in the second plurality of tiles, wherein the first power grid tier has a higher via density than the second power grid tier with respect to vias extending between metal layers defining power rails and ground rails for the integrated circuit module.2. The integrated circuit module of claim 1, wherein the power rails and ground rails for the first power grid tier have a greater width than the power rails and ground rails for the second power grid tier.3. The integrated circuit module of claim 1, wherein the power rails and ground rails for the first power grid tier have a smaller pitch than the power rails and ground rails for the second power grid tier.4. The integrated circuit module of claim 1, wherein the integrated circuit module comprises a single hard macro.5. The integrated circuit module of claim 1, wherein the first power grid tier comprises a plurality of power grid tiers having different via densities.6. The integrated circuit module of claim 1, wherein the first power grid tier includes a greater number of power switches than the second power grid tier.7. The integrated circuit module of claim 4, wherein a majority of the single hard macro has a power grid tier with the lowest via density.8. A method of physical design for a hard macro of an integrated circuit, the hard macro comprising a plurality of tiles, the method comprising:allocating a first subset of the tiles in the plurality of tiles to a first power grid tier;during a placement and route stage for the hard macro, identifying first tiles among the first subset of tiles that have power supply voltage drop regions (hotspots) after clock tree synthesis;declustering clock drivers for the first tiles such that each first tile has a first modified hotspot, wherein the first modified hotspot is smaller than the hotspot;identifying a plurality of the first tiles having a power supply voltage drop greater than a threshold percentage of the power supply voltage for the hard macro; andadjusting the identified first tiles to have a second power grid tier with an increased number of power switches compared to the first power grid tier to form second tiles, each with a second modified hotspot, wherein the second modified hotspot is smaller than the first modified hotspot.9. The method of claim 8, further comprising:identifying a plurality of the second tiles having a power supply voltage drop greater than a first threshold; andadjusting the identified second tiles to have a third power grid tier having a greater width for its power and ground rails in a set of lower metal layers, as compared to the second power grid tier, to form third tiles with a third modified hotspot, wherein the third modified hotspot is smaller than the second modified hotspot.10. The method of claim 9, further comprising:identifying a plurality of the third tiles having a power supply voltage drop greater than a second threshold that is smaller than the first threshold; andadjusting the identified third tiles to have a fourth power grid tier with a greater width for its power and ground rails in a set of upper metal layers, as compared to the third power grid tier, to form fourth tiles with a fourth modified hotspot.11. The method of claim 8, further comprising performing parasitic resistance and capacitance extraction on the hard macro, followed by a timing analysis.12. The method of claim 8, further comprising performing a final power supply voltage drop analysis on the hard macro.13. The method of claim 8, wherein identifying the first tiles having the hotspots comprises identifying first tiles having a power supply voltage drop greater than 10% of the power supply voltage for the hard macro.14. The method of claim 9, wherein the first threshold is about 10 mV.15. The method of claim 10, wherein the second threshold is about 5 mV.16. The method of claim 9, wherein the set of lower metal layers comprises a lowest first metal layer through a fourth metal layer.17. The method of claim 10, wherein the set of upper metal layers comprises a fifth metal layer through an uppermost eighth metal layer.18. An integrated circuit module comprising:at least one critical path;a non-critical path portion;a first power grid tier for the at least one critical path; anda second power grid tier for the non-critical path portion, wherein both the first power grid tier and the second power grid tier include power rails and ground rails defined in a plurality of metal layers, and wherein the width of the power rails and ground rails in the first power grid tier is greater than the width of the power rails and ground rails in the second power grid tier.19. The integrated circuit module of claim 18, wherein the integrated circuit module comprises a single hard macro.20. The integrated circuit module of claim 18, wherein the second power grid tier comprises a plurality of power grid tiers. |
Adaptive multi-tier power distribution grid for integrated circuits[0001] This application claims priority to U.S. Patent Application No. 15/432,431, filed on February 14, 2017, which claims the benefit of U.S. Provisional Patent Application No. 62/424,289, filed on November 18, 2016.[0002] The present application relates to power distribution for integrated circuits, and more specifically to adaptive multi-tier power distribution grids for integrated circuits.[0003] Power distribution is an important factor in integrated circuit design. For example, microprocessor integrated circuits such as systems-on-chip (SoCs) include a large number of transistors that can shift from idle to active switching. The sudden transition of a large number of transistors to the active state causes the power supply voltage delivered to the transistors to fluctuate. Because of such fluctuations, the system may be reset or experience errors if the power supply voltage drops below a minimum requirement. The resistance of the power grid that provides the power supply voltage is an important factor in minimizing the voltage drop in response to a sudden start-up of a circuit module. For example, the number of vias (via density) from the power rails in a circuit module to the various transistors can be increased relative to other modules depending on the power requirements. In addition, the width and density of the power rails can be increased. Similarly, the number of head switches that couple the rails in a given power domain to the main power rail can vary depending on the power requirements of a given circuit module. Finally, the number and density of the decoupling capacitors that support the power rails in a given power domain can also vary.[0004] It has therefore been conventional to design an SoC to include a plurality of power-grid tiers. Each tier corresponds to a particular set of power grid elements, such as via density, power rail width and density, head-switch density, and decoupling capacitor density. These power grid elements will be better understood with reference to the processing flow for a conventional physical design (PD) of an integrated circuit as shown in FIG. 1. The processing starts with a block floor-planning flow 100 that receives various inputs, such as a netlist, a unified power format (UPF) file, timing constraints, multi-voltage island constraints, and pin preferences, and performs a robust power grid plan in which the logic functions for the various hard macros (circuit modules) are assigned to a given power grid tier based on those inputs. The power grid planning is considered to be "robust" in that a given hard macro is assigned to a corresponding power grid tier, so that the resulting voltage rails have the same via density and other power-grid-tier elements throughout the hard macro. With the power grid tiers assigned, a placement and route step 105 is performed that includes the conventional cell placement, clock tree synthesis, routing, and finishing (design-for-manufacturing (DFM)) substeps, which is followed by a resistance and capacitance (RC) extraction step 110 and then by a timing, noise, and power analysis 120. Finally, the design is subjected to a current-resistance (IR) drop analysis 125, which determines whether the hard macro has regions in which the power supply voltage drops undesirably.
If such a region exists, the power grid planning step 100, the placement and route step 105, the RC extraction step 110, and the timing, noise, and power analysis step 120 are repeated as needed to accommodate the required design modifications through engineering change orders (ECOs).[0005] Conventional SoC design processes must also address density and the related cost concerns. It is thus quite difficult to assign an appropriate power grid tier to a given circuit module. If the power grid tier is too robust with respect to the power requirements of the corresponding circuit module, density is compromised. Conversely, if the power grid tier is insufficiently robust, the circuit module may be reset and/or malfunction due to an inadequate power supply voltage. In addition, factors such as non-linear resistance scaling, a lack of on-chip resources, increased performance requirements, density, and routability complicate power grid design. For example, FIG. 2 illustrates the power supply voltage drops (IR drops) for a conventional hard macro designed according to the processing flow described with respect to FIG. 1. In this example, the third-tier power grid (PG3) is selected for the entire hard macro. This design results in various clusters 200 of clock (CLK) drivers with high drive strength near the critical path, which cause unwanted localized power supply voltage drops. At the same time, a significant portion of the hard macro, such as region 205, uses PG3 unnecessarily, which lowers routability and increases cost.[0006] Accordingly, there is a need in the art for improved power grid architectures for integrated circuits.[0007] An improved power-grid-tier design process is provided in which a hard macro receives multiple power grid tier allocations. As used herein, the term "hard macro" refers to a fully routed design that is ready to be implemented in the semiconductor masking steps during the manufacture of a semiconductor die containing the circuit module implemented through the hard macro. The hard macro occupies an entire footprint on the semiconductor die. This footprint includes multiple circuit tiles, where each tile occupies a certain amount of die space within that footprint. Some of the tiles, such as those containing the critical paths for the hard macro, can be assigned a more robust power grid tier, while the remainder of the tiles in the hard macro receive a less robust power grid tier according to their expected power supply voltage drops. In particular, if it is determined that a tile would have too high a power supply voltage drop given a less robust power grid tier, the tile is assigned a more robust power grid tier. In this manner, the density problem caused by the conventional fixed power grid allocation to a hard macro, as well as the problem of localized power supply voltage drops, is solved.[0008] These and additional advantages may be better appreciated through the following detailed description.[0009] FIG. 1 is a flowchart relating to a conventional physical design process. [0010] FIG. 2 illustrates a floor plan for a hard macro designed according to the process of FIG. 1. [0011] FIG. 3 is a flowchart relating to a physical design process that provides adaptive power grid tier allocation for a hard macro according to aspects of the present disclosure. [0012] FIG. 4 is a floor plan for the hard macro of FIG. 2, designed according to the process of FIG. 3. [0013]
FIG. 5A is a plan view of the via densities in metal layers M1 through M4 for power grid tiers PG2 and PG3 for a portion of a hard macro. [0014] FIG. 5B is a plan view of the hard macro portion of FIG. 5A after an upgrade of the power grid tier to PG4. [0015] FIG. 6 is a flowchart relating to an example method of allocating power grid tiers for a hard macro according to aspects of the present disclosure.[0016] Embodiments of the present invention and their advantages are best understood by referring to the detailed description below. It should be understood that like reference numerals are used to identify like elements illustrated in one or more of the figures.[0017] To accommodate localized regions of power supply voltage drop caused by resistive (current-resistance (IR)) losses in circuit elements such as clock drivers, and to provide additional power grid resources for those die regions, power grid planning for a hard macro is relaxed during the design phase, providing an adaptive multi-tier power grid for integrated circuits. The hard macro occupies a certain amount of die space on the semiconductor die, referred to herein as its footprint. Depending on the functional aspects of the devices forming the hard macro, the footprint is divided into multiple tiles. The size of the tiles can vary according to the needs of the corresponding functionality they implement. As used herein, a region of significant localized power supply voltage drop in a tile is referred to as a "hotspot." The relaxation of power grid planning allows an individual hard macro to contain multiple power grid tiers, such that different power grid tiers are assigned to different tiles in the footprint. Those tiles with a relatively small localized power supply voltage drop are assigned a less robust power grid tier. Conversely, tiles with a more pronounced power supply voltage drop are assigned a more robust power grid tier. In this manner, the power grid allocation is optimized in that tiles incorporating the critical paths in the hard macro may receive a more robust power grid tier, while non-critical tiles may receive a less robust power grid tier. This solves the problem of individual hard macros having both an insufficiently robust power grid tier in certain areas and an overly robust power grid tier in other areas.[0018] As used herein, a power grid tier refers to a specific allocation for each of several elements, such as the number of vias (via density) from the power rails in the circuit module to the various transistors, the width and density of the power rails for the circuit module, the number of head switches that couple the power domains of the circuit module to the main power rail, and the number and density of the decoupling capacitors that support power delivery by the power rails for the circuit module. More generally, a power grid tier refers to a specific allocation for at least one of these elements. One power grid tier can then be classified as more robust than another power grid tier if at least one of these elements is modified to produce a smaller power supply voltage drop.
In general, the designer may choose from a plurality of power grid tiers, ranging from the lowest tier, in which the elements have their least robust values, to the highest tier, in which the elements have their most robust values.[0019] To solve the problem of individual hard macros having both insufficiently robust power grid tiers in certain areas and overly robust power grid tiers in other areas, the critical paths for the hard macro are identified during the placement and route step of the physical design process. The critical paths can then be assigned a more robust power grid tier. The default for the remaining tiles that form the hard macro is a more relaxed power grid tier, which results in a higher density. With the critical paths identified, the clock drivers can be de-clustered during the placement and route stage so that oversized, unnecessarily large clock drivers are not used. This de-clustering of the clock drivers mitigates the IR-drop (localized hotspot) problem.[0020] An example physical design flow 300 for obtaining these benefits is shown in FIG. 3. Processing begins with a power grid planning stage 305 that receives the conventional netlist, unified power format (UPF), timing constraint, multi-voltage (MV) island constraint, and pin preference inputs described for stage 100 of FIG. 1. However, stage 305 is more relaxed compared to stage 100 in that stage 305 includes an adaptive allocation of power grid tiers for a given hard macro through the identification of its critical paths. The critical paths are thus assigned a more robust power grid tier, while the tiles forming the remainder of the hard macro are assigned a less robust power grid tier. As mentioned earlier, each power grid tier includes specific allocations of via size, enclosure, via pitch and density, power and ground rail width and pitch, power switch density and pitch, and decoupling capacitor density.[0021] The subsequent placement and route stage 310 includes conventional cell placement and clock tree synthesis. However, after clock tree synthesis and timing optimization, these conventional steps are followed by a power supply voltage drop (IR) hotspot analysis and power grid adjustment step 330 that includes the identification of IR bottlenecks (hotspots). For example, hotspots are identified when a power supply voltage drop threshold is crossed in a design simulation. The clock drivers for tiles with hotspots are then de-clustered in step 335. If a certain residual concentration of power supply voltage droop remains in the hotspot area (e.g., greater than 10 percent of the power supply voltage), the power grid tier for the affected tile can be adjusted in step 340 by assigning a power grid tier with an increased density of power switches. If the resulting power supply voltage drop within the hotspot tile still deviates from the VDD threshold, such as by more than 10 mV, the power grid tier for the affected hotspot tile can be further adjusted in operation 345 by assigning a power grid tier with a more robust lower-metal-layer pitch and width for the power and ground (PG) rails. In that regard, semiconductor processing typically provides multiple metal layers, ranging from the lowest metal layer closest to the semiconductor die to the top metal layer farthest from the semiconductor die.
Operation 345 is thus directed to reducing the pitch and increasing the width of the power and ground (PG) rails in the lower metal layers. If the resulting power supply voltage drop within the hotspot tile still exceeds a threshold somewhat reduced from that used in operation 345, such as a power supply voltage drop greater than 5 mV, the affected tile may be assigned an even more robust power grid tier with an increased width and a reduced pitch for the PG rails in the top metal layers in operation 350. For example, in an embodiment having eight metal layers, the four lowest metal layers can be affected by operation 345, while the four top metal layers can be affected by operation 350. Conventional routing and finishing operations, along with any necessary opportunistic adjustment of the power grid tiers, can then complete the placement and route stage 310.[0022] A conventional RC extraction stage 320, a timing, noise, and power analysis stage 325, and an IR drop analysis 330 follow the placement and route stage 310. However, because of the adjustment of the power grid tiers in the placement and route stage 310, as well as in the initial planning stage 305, no further engineering change orders (ECOs) are needed (note that an ECO from the IR drop analysis 330 would begin the process anew at the power grid planning stage 305). The power-grid-tier adaptations that result for the same hard macro used in FIG. 2 are shown in FIG. 4. The localized hotspot for tile 400 has been significantly reduced in size, and tile 400 has been assigned the most robust power grid tier (PG4). Tile 405, which has a lower power supply voltage drop compared to tile 400, is assigned the second-highest power grid tier (PG3). The majority of the hard macro, however, receives the more relaxed power grid tier (PG2). In contrast, the hard macro designed using the conventional techniques described with respect to FIG. 2 uses the more robust power-grid tier PG3 across the entire hard macro, yet it results in worse hotspots 200 while further reducing density. The hard macro of FIG. 4, by comparison, has improved metal layer utilization, higher density, and reduced manufacturing cost.[0023] Some example power grid tiers will now be described. In one embodiment, the power grid tiers PG2 and PG3 share the same via density in the lower metal layers M1 through M4. For example, FIG. 5A shows the via density in metal layers M1-M4 for a hard macro tile with either a PG2 or a PG3 power-grid-tier allocation. A via 505 extends from metal layer M3 to metal layer M1 for both the power rail VDD and the ground rail VSS and is surrounded by a metal shield 510. Similarly, a via 515 extends from metal layer M4 to metal layer M2 for both the power rail VDD and the ground rail VSS and is surrounded by a metal shield 520. When the same tile is upgraded to power grid tier PG4, the densities of vias 505 and 515 are effectively doubled (as are the densities of the corresponding metal shields 510 and 520, respectively), as shown in FIG. 5B. In this fashion, the localized hotspots for the tiles can be reduced in size through the allocation of more robust power grid tiers. An example method of allocating power grid tiers for a hard macro will now be discussed.[0024] FIG. 6 is a flowchart relating to a method of allocating power grid tiers for a hard macro according to aspects of the present disclosure.
The method includes an operation 600 of allocating a first subset of the tiles in a plurality of tiles for a hard macro to a first power grid tier. The allocation of power grid tiers to the critical paths in stage 305 is an example of operation 600. In addition, the method includes an operation 605 of identifying, during the placement and route stage for the hard macro, first tiles among the first subset of tiles that have power supply voltage drop regions (hotspots) after clock tree synthesis, and de-clustering the clock drivers for the first tiles so that each first tile has a first modified hotspot, wherein the first modified hotspot is smaller than the hotspot. An example of such identification for operation 605 occurs in the power grid planning stage 305 described with respect to FIG. 3. De-clustering the clock drivers in a tile containing a hotspot, as described for step 335 of FIG. 3, is an example of operation 605. Finally, the method includes an operation 610 of identifying a plurality of the first tiles that have a power supply voltage drop greater than a threshold percentage of the power supply voltage for the hard macro, and adjusting the identified first tiles to have a second power grid tier with an increased number of power switches compared to the first power grid tier, to form second tiles each with a second modified hotspot, wherein the second modified hotspot is smaller than the first modified hotspot. The addition of power switches to those tiles with significant power supply voltage drops, as described for step 340, is an example of operation 610.[0025] As those of skill in the art will recognize, many modifications, substitutions, and variations can be made in and to the materials, apparatus, configurations, and methods of use of the devices of the present disclosure without departing from its spirit and scope, depending on the particular application at hand. In this light, the scope of the present disclosure should not be limited to that of the particular embodiments illustrated and described herein, as they are merely some examples thereof, but rather should be fully commensurate with that of the claims appended hereafter and their functional equivalents. |
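For readers who find pseudocode clearer than the flowchart prose, the adaptive allocation of FIG. 3 and FIG. 6 can be summarized in a short sketch. This is purely illustrative and not part of the patent record: the tier names, helper functions, and assumed improvement factors are hypothetical, and only the thresholds (a hotspot above roughly 10 percent of VDD, then the approximately 10 mV and 5 mV residual-drop checks of steps 340-350) come from the description above. A real flow would re-run extraction and IR simulation after each adjustment rather than scaling a stored number.

# Illustrative sketch only -- not the patented flow itself. Models the
# adaptive tier assignment of FIG. 3 / FIG. 6 (stages 305/335/340/345/350,
# operations 600/605/610) with hypothetical names and improvement factors.
from dataclasses import dataclass, field

@dataclass
class Tile:
    name: str
    critical: bool               # tile contains part of a critical path
    ir_drop_v: float             # simulated power supply voltage drop (volts)
    grid: list = field(default_factory=list)  # accumulated tier adjustments

def adjust(tile: Tile, adjustment: str, assumed_improvement: float) -> None:
    """Record a power-grid-tier adjustment and model its assumed effect."""
    tile.grid.append(adjustment)
    tile.ir_drop_v *= assumed_improvement

def assign_power_grid_tiers(tiles: list, vdd: float) -> None:
    for t in tiles:
        # Stage 305 / operation 600: critical-path tiles start on a more
        # robust tier; the remainder keeps the relaxed default (e.g., PG2).
        t.grid.append("PG3" if t.critical else "PG2")
        # Step 335 / operation 605: de-cluster clock drivers in hotspot tiles.
        if t.ir_drop_v > 0.10 * vdd:
            adjust(t, "decluster-clock-drivers", 0.8)
        # Step 340 / operation 610: residual droop above ~10% of VDD gets a
        # tier with an increased density of power switches.
        if t.ir_drop_v > 0.10 * vdd:
            adjust(t, "more-power-switches", 0.7)
        # Step 345: drop still above ~10 mV -> wider, finer-pitch PG rails
        # in the lower metal layers (M1-M4 in the eight-layer example).
        if t.ir_drop_v > 0.010:
            adjust(t, "robust-lower-metal-rails", 0.7)
        # Step 350: drop still above ~5 mV -> wider, finer-pitch PG rails
        # in the upper metal layers (M5-M8 in the eight-layer example).
        if t.ir_drop_v > 0.005:
            adjust(t, "robust-upper-metal-rails", 0.7)

tiles = [Tile("tile-400", True, 0.120),   # severe hotspot near a critical path
         Tile("tile-405", True, 0.020),   # milder hotspot
         Tile("bulk", False, 0.004)]      # quiet tile keeps the relaxed tier
assign_power_grid_tiers(tiles, vdd=0.9)
for t in tiles:
    print(t.name, t.grid, round(t.ir_drop_v, 4))

Note the ordering choice: each refinement is attempted only if the (assumed re-simulated) drop still violates the progressively tighter threshold, mirroring how the flow escalates from clock-driver de-clustering to power switches to lower-metal and then upper-metal rail adjustments.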
A method used in forming a memory array and conductive through-array-vias (TAVs) comprises forming a stack comprising vertically-alternating insulative tiers and wordline tiers. A mask is formed comprising horizontally-elongated trench openings and operative TAV openings above the stack. Etching is conducted of unmasked portions of the stack through the trench and operative TAV openings in the mask to form horizontally-elongated trench openings in the stack and to form operative TAV openings in the stack. Conductive material is formed in the operative TAV openings in the stack to form individual operative TAVs in individual of the operative TAV openings in the stack. A wordline-intervening structure is formed in individual of the trench openings in the stack. |
CLAIMS:1. A method used in forming a memory array and conductive through-array-vias (TAVs), comprising:forming a stack comprising vertically-alternating insulative tiers and wordline tiers;forming a mask comprising horizontally-elongated trench openings and operative TAV openings above the stack;etching unmasked portions of the stack through the trench and operative TAV openings in the mask to form horizontally-elongated trench openings in the stack and to form operative TAV openings in the stack;forming conductive material in the operative TAV openings in the stack to form individual operative TAVs in individual of the operative TAV openings in the stack; andforming a wordline-intervening structure in individual of the trench openings in the stack.2. The method of claim 1 comprising forming channel-material strings through the insulative tiers and the wordline tiers before the etching.3. The method of claim 1 comprising forming channel-material strings through the insulative tiers and the wordline tiers after the etching.4. The method of claim 1 comprising forming the conductive material in the individual operative TAV openings in the stack before forming the wordline-intervening structures in the stack.5. The method of claim 1 comprising forming the wordline-intervening structures in the stack before forming the conductive material in the individual operative TAV openings in the stack.6. The method of claim 1 comprising:forming the mask to comprise dummy TAV openings;the etching also forming dummy TAV openings in the stack; and forming dummy material in individual of the dummy TAV openings in the stack.7. The method of claim 6 wherein, the dummy material comprises the conductive material; andthe forming the conductive material in the individual operative TAV openings in the stack and in the individual dummy TAV openings in the stack occurs at the same time.8. The method of claim 6 wherein, the dummy material does not comprise the conductive material; and the forming the conductive material in the individual operative TAV openings in the stack and the forming the dummy material in the individual dummy TAV openings in the stack occur at different time-spaced periods of time.9. The method of claim 8 comprising forming the conductive material in the individual operative TAV openings in the stack before forming the dummy material in the individual dummy TAV openings in the stack.10. The method of claim 1 comprising before forming the conductive material in the individual operative TAV openings in the stack and before forming the wordline-intervening structures in the stack, forming and removing sacrificial plugs in the individual operative TAV openings in the stack and in the individual trench openings in the stack.11. The method of claim 10 wherein the sacrificial plugs in the individual operative TAV openings in the stack and in the individual trench openings in the stack less-than-fill the individual operative TAV openings in the stack and less-than-fill the individual trench openings in the stack thereby comprising a void space below individual of the sacrificial plugs in the individual operative TAV openings and in the individual trench openings in the stack.12. The method of claim 10 wherein the sacrificial plugs in the individual operative TAV openings in the stack and in the individual trench openings in the stack completely fill the individual operative TAV openings in the stack and completely fill the individual trench openings in the stack.13.
The method of claim 10 wherein the sacrificial plugs in the individual operative TAV openings in the stack and in the individual trench openings in the stack are formed at the same time and are removed at different time-spaced periods of time.14. The method of claim 10 comprising removing the sacrificial plugs from the individual operative TAV openings in the stack before removing the sacrificial plugs in the individual trench openings in the stack, the forming of the conductive material in the individual operative TAV openings in the stack occurring before removing the sacrificial plugs in the individual trench openings in the stack.15. The method of claim 1 wherein the stack comprises an uppermost conductor tier and further comprising:forming a step atop or above an uppermost of the insulative tiers on at least one side of individual wordlines, the wordline-intervening structure being atop the step.16. The method of claim 1 wherein the stack comprises an uppermost conductor tier and further comprising:forming the wordline-intervening structure to comprise opposing laterally-outer longitudinal edges, at least some of each of the opposing laterally-outer longitudinal edges above the uppermost conductor tier being less overall steep than the opposing laterally-outer longitudinal edges below the uppermost conductor tier.17. The method of claim 1 wherein the etching is conducted in a single etching step.18. A method used in forming a memory array and conductive through-array-vias (TAVs), comprising:forming a stack comprising an uppermost conductor tier and vertically-alternating insulative tiers and wordline tiers, the uppermost conductor tier and wordline tiers comprising a first material, the insulative tiers comprising a second material of different composition from that of the first material;
forming channel-material strings through the insulative tiers and the wordline tiers;forming a mask comprising horizontally-elongated trench openings and operative TAV openings above the stack;etching unmasked portions of the stack through the trench and operative TAV openings in the mask to form horizontally-elongated trench openings in the stack and to form operative TAV openings in the stack;forming conductive material in the operative TAV openings in the stack to form individual operative TAVs in individual of the operative TAV openings in the stack;removing the first material after forming the conductive material in the operative TAV openings in the stack to form wordline-tier voids and an uppermost-conductor-tier void;forming conducting material in the wordline-tier voids to comprise individual wordlines and in the uppermost-conductor-tier void; andafter forming the conducting material, forming a wordline-intervening structure in individual of the trench openings in the stack.19. The method of claim 18 wherein the forming of the conductive material in the individual operative TAV openings in the stack occurs while at least all of a lower half of individual of the trench openings in the stack is completely occluded.20. The method of claim 19 wherein the forming of the conductive material in the individual operative TAV openings in the stack occurs while all of individual of the trench openings in the stack are completely occluded.21. The method of claim 20 wherein the forming of the conductive material in the individual operative TAV openings in the stack occurs while less-than-all of the individual trench openings in the stack are completely filled with sacrificial material thereby comprising a void space below the sacrificial material in the individual trench openings in the stack.22. The method of claim 20 wherein the forming of the conductive material in the individual operative TAV openings in the stack occurs while all of the individual trench openings in the stack are completely filled with sacrificial material.23. A memory array comprising:a vertical stack comprising an uppermost insulating tier, an uppermost conductor tier below the insulating tier, and alternating insulative tiers and wordline tiers below the uppermost conductor tier, the wordline tiers comprising gate regions of individual memory cells, the gate regions individually comprising part of a wordline in individual of the wordline tiers;channel-material strings extending elevationally through the insulative tiers and the wordline tiers;the individual memory cells comprising a memory structure laterally between individual of the gate regions and channel material of the channel-material strings;a wordline-intervening structure extending through the stack between immediately-adjacent of the wordlines; anda step atop or above an uppermost of the insulative tiers of the alternating insulative tiers and wordline tiers on at least one side of individual of the wordlines, the wordline-intervening structure being atop the step.24. The memory array of claim 23 wherein the step is atop the uppermost insulative tier of the alternating insulative tiers and wordline tiers and comprises insulative material of the uppermost insulative tier of the alternating insulative tiers and wordline tiers.25. The memory array of claim 24 wherein the step comprises an uppermost surface of said insulative material.26.
The memory array of claim 23 wherein the step is above the uppermost insulative tier of the alternating insulative tiers and wordline tiers.27. The memory array of claim 26 wherein the step is above the uppermost conductor tier.28. The memory array of claim 27 wherein the step is within insulating material of the uppermost insulating tier.29. The memory array of claim 26 wherein the step is atop the uppermost conductor tier and comprises conducting material of the conductor tier.30. The memory array of claim 29 wherein the step comprises an uppermost surface of said conducting material.31. The memory array of claim 23 wherein the step is on only one side of the individual wordlines.32. The memory array of claim 23 wherein the step is on both sides of the individual wordlines.33. The memory array of claim 23 wherein the step is horizontal.34. The memory array of claim 23 comprising CMOS-under-array circuitry.35. A memory array comprising:a vertical stack comprising an uppermost insulating tier, an uppermost conductor tier below the insulating tier, and alternating insulative tiers and wordline tiers below the uppermost conductor tier, the wordline tiers comprising gate regions of individual memory cells, the gate regions individually comprising part of a wordline in individual of the wordline tiers;channel-material strings extending elevationally through the insulative tiers and the wordline tiers;the individual memory cells comprising a memory structure laterally between individual of the gate regions and channel material of the channel-material strings;a wordline-intervening structure extending through the stack between immediately-adjacent of the wordlines; andthe wordline-intervening structure comprising opposing laterally-outer longitudinal edges, at least some of each of the opposing laterally-outer longitudinal edges above the uppermost conductor tier being less overall steep than the opposing laterally-outer longitudinal edges below the uppermost conductor tier.36. The memory array of claim 35 wherein said at least some have constant slope above the uppermost conductor tier.37. The memory array of claim 36 wherein all of each of the opposing laterally-outer longitudinal edges above the uppermost conductor tier have constant slope.38. The memory array of claim 35 wherein said at least some do not have constant slope above the uppermost conductor tier.39. The memory array of claim 38 wherein said at least some is curved. 40. The memory array of claim 35 wherein each of the opposing laterally-outer longitudinal edges on each side has a respective lowest location where steepness changes to a different and constant steepness below said lowest location, said lowest location on each side being at the same elevation relative one another. 41. The memory array of claim 35 wherein each of the opposing laterally-outer longitudinal edges on each side has a respective lowest location where steepness changes to a different and constant steepness below said lowest location, said lowest location on each side being at different elevations relative one another. 42. The memory array of claim 35 comprising CMOS-under-array circuitry. |
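Before the detailed description that follows, a small illustrative model may help in visualizing the tier arrangement recited in claims 18, 23, and 35: an uppermost insulating tier, an uppermost conductor tier below it, and vertically-alternating insulative and wordline tiers below that. This sketch is not part of the patent record; the names and the tier count are hypothetical, and which tier of each alternating pair comes first going downward is an illustrative choice rather than something the claims fix.

# Illustrative only -- not part of the patent record. Models the top-down tier
# ordering recited in claims 18, 23, and 35; names and counts are hypothetical.
INSULATING, CONDUCTOR, INSULATIVE, WORDLINE = (
    "uppermost insulating", "uppermost conductor", "insulative", "wordline")

def build_stack(num_wordline_tiers: int) -> list:
    """Return the tiers ordered top-down per the claimed arrangement."""
    stack = [INSULATING, CONDUCTOR]          # the two uppermost tiers
    for _ in range(num_wordline_tiers):      # vertically-alternating pairs
        stack += [INSULATIVE, WORDLINE]
    return stack

def is_claimed_ordering(stack: list) -> bool:
    """Check: insulating tier, then conductor tier, then strict alternation."""
    if stack[:2] != [INSULATING, CONDUCTOR]:
        return False
    return all(tier == (INSULATIVE if i % 2 == 0 else WORDLINE)
               for i, tier in enumerate(stack[2:]))

# A real device would have dozens to over a hundred such tier pairs (see the
# description below); four are used here only to keep the example small.
assert is_claimed_ordering(build_stack(4))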
DESCRIPTION MEMORY ARRAYS AND METHODS USED IN FORMING A MEMORY ARRAY AND CONDUCTIVE THROUGH-ARRAY-VIAS (TAVS) TECHNICAL FIELD Embodiments disclosed herein pertain to memory arrays and to methods used in forming a memory array and conductive through-array-vias (TAVs). BACKGROUND Memory is one type of integrated circuitry and is used in computer systems for storing data. Memory may be fabricated in one or more arrays of individual memory cells. Memory cells may be written to, or read from, using digit lines (which may also be referred to as bitlines, data lines, or sense lines) and access lines (which may also be referred to as wordlines). The sense lines may conductively interconnect memory cells along columns of the array, and the access lines may conductively interconnect memory cells along rows of the array. Each memory cell may be uniquely addressed through the combination of a sense line and an access line. Memory cells may be volatile, semi-volatile, or non-volatile. Non-volatile memory cells can store data for extended periods of time in the absence of power. Non-volatile memory is conventionally specified to be memory having a retention time of at least about 10 years. Volatile memory dissipates and is therefore refreshed/rewritten to maintain data storage. Volatile memory may have a retention time of milliseconds or less. Regardless, memory cells are configured to retain or store memory in at least two different selectable states. In a binary system, the states are considered as either a “0” or a “1”. In other systems, at least some individual memory cells may be configured to store more than two levels or states of information. A field effect transistor is one type of electronic component that may be used in a memory cell. These transistors comprise a pair of conductive source/drain regions having a semiconductive channel region there-between. A conductive gate is adjacent the channel region and separated there-from by a thin gate insulator. Application of a suitable voltage to the gate allows
current to flow from one of the source/drain regions to the other through the channel region. When the voltage is removed from the gate, current is largely prevented from flowing through the channel region. Field effect transistors may also include additional structure, for example a reversibly programmable charge-storage region as part of the gate construction between the gate insulator and the conductive gate. Flash memory is one type of memory and has numerous uses in modern computers and devices. For instance, modern personal computers may have BIOS stored on a flash memory chip. As another example, it is becoming increasingly common for computers and other devices to utilize flash memory in solid state drives to replace conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade the devices for enhanced features. NAND may be a basic architecture of integrated flash memory. A NAND cell unit comprises at least one selecting device coupled in series to a serial combination of memory cells (with the serial combination commonly being referred to as a NAND string). NAND architecture may be configured in a three-dimensional arrangement comprising vertically-stacked memory cells individually comprising a reversibly programmable vertical transistor. Control or other circuitry may be formed below the vertically-stacked memory cells. Other volatile or non-volatile memory array architectures may also comprise vertically-stacked memory cells that individually comprise a transistor. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a diagrammatic cross-sectional view of a portion of a substrate in process in accordance with an embodiment of the invention and is taken through line 1-1 in Fig. 2. Fig. 2 is a diagrammatic cross-sectional view taken through line 2-2 in Fig. 1. Figs. 3-33 are diagrammatic sequential sectional and/or enlarged views of the construction of Fig. 1 in process in accordance with some embodiments of the invention.
Figs. 20A, 20B, 20C, 33A, 33B, 33C, and 34-42 are diagrammatic cross-sectional views of a portion of substrates in process in accordance with embodiments of the invention. DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS Embodiments of the invention encompass methods used in forming a memory array and conductive through-array-vias (TAVs), for example an array of NAND or other memory cells having peripheral control circuitry under the array (e.g., CMOS-under-array). Embodiments of the invention encompass so-called “gate-last” or “replacement-gate” processing, so-called “gate-first” processing, and other processing whether existing or future-developed independent of when transistor gates are formed. Embodiments of the invention also encompass a memory array (e.g., NAND architecture) independent of method of manufacture. First example method embodiments are described with reference to Figs. 1-33 which may be considered as a “gate-last” or “replacement-gate” process. Figs. 1 and 2 show a construction 10 having an array or array area 12 in which elevationally-extending strings of transistors and/or memory cells will be formed. Construction 10 comprises a base substrate 11 having any one or more of conductive/conductor/conducting, semiconductive/semiconductor/semiconducting, or insulative/insulator/insulating (i.e., electrically herein) materials. Various materials have been formed elevationally over base substrate 11. Materials may be aside, elevationally inward, or elevationally outward of the Figs. 1 and 2-depicted materials. For example, other partially or wholly fabricated components of integrated circuitry may be provided somewhere above, about, or within base substrate 11. Control and/or other peripheral circuitry for operating components within an array (e.g., array 12) of elevationally-extending strings of memory cells may also be fabricated and may or may not be wholly or partially within an array or sub-array. Further, multiple sub-arrays may also be fabricated and operated independently, in tandem, or otherwise relative one another. In this document, a “sub-array” may also be considered as an array. Example construction 10 comprises a conductive tier 16 that has been formed above substrate 11. Example conductive tier 16 is shown as
comprising conductive material 17 (e.g., conductively-doped semiconductive material such as conductively-doped polysilicon) above metal material 19 (e.g., WSix). An etch-stop region 21 may be within conductive material 17. Region 21 may be conductive, insulative, or semiconductive, with elemental tungsten being an example, and may be sacrificial. Conductive tier 16 may comprise part of control circuitry (e.g., peripheral-under-array circuitry and/or a common source line or plate) used to control read and write access to the transistors and/or memory cells that will be formed within array 12. A stack 18 has been formed above conductive tier 16. In some embodiments, stack 18 comprises an uppermost insulating tier 13, an uppermost conductor tier 15 below uppermost insulating tier 13, and alternating insulative tiers 20 and wordline tiers 22 below uppermost conductor tier 15. Example thickness for each of such tiers is 25 to 60 nanometers. Only a small number of tiers 20 and 22 is shown, with stack 18 more likely comprising dozens, a hundred or more, etc., of tiers 20 and 22. Other circuitry that may or may not be part of peripheral and/or control circuitry may be between conductive tier 16 and stack 18. For example, multiple vertically alternating tiers of conductive material and insulative material of such circuitry may be below a lowest of wordline tiers 22 and/or above an uppermost of wordline tiers 22. For example, one or more select gate tiers (not shown) may be between conductive tier 16 and the lowest wordline tier 22 and one or more select gate tiers may be above an uppermost of wordline tiers 22. Regardless, uppermost conductor tier 15 may or may not be a wordline tier. Regardless, wordline tiers 22 and uppermost conductor tier 15 may not comprise conductive material at this point in processing in conjunction with the hereby initially-described example method embodiment, which is “gate-last” or “replacement-gate”. Further, insulative tiers 20 and uppermost insulating tier 13 may not comprise insulative material or be insulative at this point in processing. Example wordline tiers 22 and uppermost conductor tier 15 comprise first material 26 (e.g., silicon nitride) which may be wholly or partially sacrificial. Example insulative tiers 20 and uppermost insulating tier 13 comprise second material 24 (e.g., silicon dioxide) that is of different
composition from that of first material 26 and which may be wholly or partially sacrificial. Referring to Figs. 3 and 4, and in one embodiment, channel openings 25 have been etched through insulative tiers 20 and wordline tiers 22 (and tiers 13 and 15) to material 17 of conductive tier 16. Channel openings 25 may go partially into material 17 as shown, may stop there-atop (not shown), or go completely there-through (not shown) either stopping on material 19 or going at least partially there-into. Alternately, as an example, channel openings 25 may stop atop or within lowest insulative tier 20. A reason for extending channel openings 25 at least to material 17 is to assure direct electrical coupling of subsequently-formed channel material (not yet shown) to conductive tier 16 without using alternative processing and structure to do so when such a connection is desired. Etch-stop material (not shown) may be within conductive material 17 to facilitate stopping of the etching of channel openings 25 atop conductive tier 16 when such is desired. Such etch-stop material may be sacrificial or non-sacrificial. By way of example and for brevity only, channel openings 25 are shown as being arranged in groups or columns of staggered rows of four openings 25 per row. Any alternate existing or future-developed arrangement and construction may be used. Transistor channel material may be formed in the individual channel openings elevationally along the insulative tiers and the wordline tiers, thus comprising individual channel-material strings, which is directly electrically coupled with conductive material in the conductive tier. Individual memory cells of the example memory array being formed may comprise a gate region (e.g., a control-gate region) and a memory structure laterally between the gate region and the channel material. In one such embodiment, the memory structure is formed to comprise a charge-blocking region, storage material (e.g., charge-storage material), and an insulative charge-passage material. The storage material (e.g., floating gate material such as doped or undoped silicon, or charge-trapping material such as silicon nitride, metal dots, etc.) of the individual memory cells is elevationally along individual of the charge-blocking regions. The insulative charge-passage material (e.g., a band gap-engineered structure having nitrogen-containing material
[e.g., silicon nitride] sandwiched between two insulator oxides [e.g., silicon dioxide]) is laterally between the channel material and the storage material. Figs. 5 and 6 show one embodiment wherein charge-blocking material 30, storage material 32, and charge-passage material 34 have been formed in individual channel openings 25 elevationally along insulative tiers 20 and wordline tiers 22. Transistor materials 30, 32, and 34 (e.g., memory cell materials) may be formed by, for example, deposition of respective thin layers thereof over stack 18 and within individual channel openings 25 followed by planarizing such back at least to an uppermost surface of stack 18. Channel material 36 has been formed in channel openings 25 elevationally along insulative tiers 20 and wordline tiers 22, thus comprising individual channel-material strings 53. Example channel materials 36 include appropriately-doped crystalline semiconductor material, such as one or more of silicon, germanium, and so-called III/V semiconductor materials (e.g., GaAs, InP, GaP, and GaN). Example thickness for each of materials 30, 32, 34, and 36 is 25 to 100 Angstroms. Punch etching may be conducted as shown to remove materials 30, 32, and 34 from the bases of channel openings 25 to expose conductive tier 16 such that channel material 36 is directly against conductive material 17 of conductive tier 16. Alternately, and by way of example only, no punch etching may be conducted and channel material 36 may be directly electrically coupled to material 17/19 by a separate conductive interconnect (not shown). Channel openings 25 are shown as comprising a radially-central solid dielectric material 38 (e.g., spin-on-dielectric, silicon dioxide, and/or silicon nitride). Alternately, and by way of example only, the radially-central portion within channel openings 25 may include void space(s) (not shown) and/or be devoid of solid material (not shown). Referring to Figs. 7-9, a mask 23 comprising masking material 27 (e.g., photoresist) has been formed above stack 18. Mask 23 comprises horizontally-elongated trench openings 28 and operative through-array-via (TAV) openings 31. In the context of this document, an “operative TAV opening” is an opening in which conductive material is or will be formed in the stack and which is an operating conductive interconnect between electronic components at different elevations in a finished construction of integrated circuitry that has been or is being fabricated.
Immediately-adjacent of horizontally-elongated trench openings 28 in mask 23 may comprise longitudinal shapes of longitudinal outlines of individual wordlines to be formed in individual wordline tiers 22. Example operative TAV openings 31 are shown as being between trench openings 28 and thereby within longitudinal outlines of the individual wordlines and at an end of a grouping of channel openings 25. Alternate placement of operative TAV openings 31 may be used. For example, one or more operative TAV openings may be placed among a grouping of channel openings 25 and/or outside of immediately-adjacent trench openings 28 outside of any wordline outline. Referring to Figs. 10-12, mask 23 has been used (e.g., as an etch mask) while etching unmasked portions of stack 18 through trench openings 28 and operative TAV openings 31 in mask 23 to form horizontally-elongated trench openings 40 in stack 18 and to form operative TAV openings 33 in stack 18. Ideally, at least TAV openings 33 extend at least to conductive tier 16. In one embodiment and as shown, the channel openings and the channel-material strings are formed through the insulative tiers and the wordline tiers before the etching exemplified by Figs. 10-12. Alternately, such channel material openings and/or channel-material strings may be formed after such etching (not shown). Regardless, openings 40 and 33 may be inwardly or outwardly tapered, with slight inward tapering being shown. Alternately, by way of example, all of the sidewalls of openings 40 and 33 may be vertical. In some embodiments, sacrificial plugs are formed in and removed from individual operative TAV openings 33 in stack 18 and in individual trench openings 40 in stack 18. Example such processing is next-described with reference to Figs. 13-19. Referring to Fig. 13, mask 23 (not shown) has been removed. Sacrificial plugs 35 comprising material 37 have been formed in openings 33 and 40. Material 37 may be any of insulating, conductive, and/or semiconductive, with an example being Al2O3. Material 26 in tiers 15 and 22 may be laterally recessed before forming plugs 35 (not shown). Regardless, and in one embodiment and as shown, such sacrificial plugs less-than-fill openings 33 and 40 thereby leaving or comprising a void space 39 in such openings below such plugs. Alternately, and by way of example
only, such sacrificial plugs could completely fill (not shown) the respective openings. Referring to Fig. 14, sacrificial masking material 41 (e.g., carbon) has been formed atop stack 18 and comprises openings 42 there-through to sacrificial plugs 35 in operative TAV openings 33 while leaving sacrificial plugs 35 in trench openings 40 covered. Referring to Fig. 15, exposed sacrificial plugs 35 in operative TAV openings 33 (not shown) have been removed followed by removing of sacrificial masking material 41 (not shown), leaving sacrificial plugs 35 in trench openings 40. Referring to Fig. 16, an insulative lining 43 (e.g., silicon dioxide) has been formed within operative TAV openings 33. Referring to Fig. 17, insulative lining 43 has been subjected to a punch etch to expose conductive material 17 of conductive tier 16, followed by formation of conductive material 44 therein and planarizing such back at least to an elevationally outermost surface of uppermost insulating tier 13, thus forming an operative TAV 45 in individual operative TAV openings 33 in stack 18. In one embodiment and as shown, the forming of conductive material 44 in individual operative TAV openings 33 in stack 18 occurs while at least all of a lower half of individual trench openings 40 in stack 18 is completely occluded and, in one embodiment as shown, while all of individual trench openings 40 in stack 18 are completely occluded. Referring to Fig. 18, an insulator material 51 (e.g., silicon dioxide) has been formed atop stack 18 and thereby comprises a part of uppermost insulating tier 13. A masking material 46 (e.g., carbon) has been formed thereover. Such has been formed to have mask openings 47 therein having a corresponding outline of trench openings 40 in stack 18. Openings 47 may be of the same lateral width (not shown) as, or may be wider than (as shown), trench openings 40. Regardless and typically, openings 47 may be misaligned to at least one side relative to underlying trench openings 40 (misalignment to the right side being shown). Referring to Figs. 19 and 20, and in one embodiment, masking material 46 (not shown) has been used as a mask while etching insulator material 51 through openings 47 (not shown), and in one embodiment into uppermost insulating tier 13, and masking material 46 (not shown) has
thereafter been removed, as have sacrificial plugs 35 (not shown) from trench openings 40. In one example, etching may be conducted entirely through uppermost insulating tier 13 to material 26 of uppermost conductor tier 15.
Figs. 20A, 20B, and 20C show alternate example constructions 10a, 10b, 10c, respectively. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix “a”, “b”, and “c”, respectively. Construction 10a in Fig. 20A is like that of Fig. 20 but for showing unlikely perfect left-right mask alignment of masking-material openings 47 (not shown) relative to trench openings 40. Fig. 20B shows a construction 10b having right mask misalignment the same as shown in Fig. 18, but where the subsequent etching has only occurred partially into uppermost insulating tier 13. Fig. 20C shows another alternate example construction 10c analogous to that of Fig. 20A where perfect left-right mask alignment of masking-material openings 47 (not shown) has occurred, and with only subsequent partial etching having been conducted into uppermost insulating tier 13 analogous to that shown in Fig. 20B.
Referring to Figs. 21 and 22, material 26 (not shown) of wordline tiers 22 and uppermost conductor tier 15 has been removed, for example by etching such selectively relative to materials 24, 30, 32, 34, 36, and 38 (e.g., using liquid or vapor H3PO4 as a primary etchant where material 26 is silicon nitride and material 24 is silicon dioxide). Such has formed wordline-tier voids 90 and an uppermost-conductor-tier void 92.
Referring to Figs. 23-25, conducting material 48 has been formed through trenches 40 into the wordline-tier voids in wordline tiers 22 and into the uppermost-conductor-tier void in uppermost conductor tier 15. A thin insulating material liner (e.g., at least one of Al2O3 and HfOx, and not shown) may be formed prior to formation of conducting material 48. Regardless, any suitable conducting material 48 may be used, for example one or both of metal material and conductively-doped semiconductive material. In but one example embodiment, conducting material 48 comprises a first-deposited conformal titanium nitride liner (not shown) followed by deposition of another-composition metal material (e.g., elemental tungsten).
Referring to Figs. 26-29, conducting material 48 has been removed from individual trenches 40. Such has resulted in formation of wordlines 29 and elevationally-extending strings 49 of individual transistors and/or memory cells 56. Approximate locations of transistors and/or memory cells 56 are indicated with a bracket in Fig. 29 and some with dashed outlines in Figs. 26 and 28, with transistors and/or memory cells 56 being essentially ring-like or annular in the depicted example. Conducting material 48 may be laterally recessed relative to sidewalls of material 24 within trench openings 40 (not shown). Conducting material 48 may be considered as having terminal ends 50 (Fig. 29) corresponding to control-gate regions 52 of individual transistors and/or memory cells 56. Control-gate regions 52 in the depicted embodiment comprise individual portions of individual wordlines 29. Materials 30, 32, and 34 may be considered as a memory structure 65 that is laterally between control-gate region 52 and channel material 36.
A charge-blocking region (e.g., charge-blocking material 30) is between storage material 32 and individual control-gate regions 52. A charge block may have the following functions in a memory cell: in a program mode, the charge block may prevent charge carriers from passing out of the storage material (e.g., floating-gate material, charge-trapping material, etc.) toward the control gate, and in an erase mode the charge block may prevent charge carriers from flowing into the storage material from the control gate. Accordingly, a charge block may function to block charge migration between the control-gate region and the storage material of individual memory cells. An example charge-blocking region as shown comprises insulator material 30. By way of further examples, a charge-blocking region may comprise a laterally (e.g., radially) outer portion of the storage material (e.g., material 32) where such storage material is insulative (e.g., in the absence of any different-composition material between an insulative storage material 32 and conducting material 48). Regardless, as an additional example, an interface of a storage material and conductive material of a control gate may be sufficient to function as a charge-blocking region in the absence of any separate-composition insulator material 30. Further, an interface of conducting material 48 with material 30 (when present) in combination with insulator material 30 may together function as
a charge-blocking region, and as alternately or additionally may a laterally-outer region of an insulative storage material (e.g., a silicon nitride material 32). An example material 30 is one or more of silicon hafnium oxide and silicon dioxide.
Referring to Figs. 30-33, a material 57 (dielectric and/or silicon-containing, such as undoped polysilicon) has been formed in individual trenches 40, thus forming a wordline-intervening structure 55 (a structure between immediately-adjacent wordlines) in individual trench openings 40 in stack 18.
Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used with respect to the above-described embodiments.
The above processing is but one example wherein conductive material 44 in individual operative TAV openings 33 in stack 18 is formed before forming wordline-intervening structures 55 in stack 18. Alternately, this could be reversed (not shown). The above processing is also but one example wherein sacrificial plugs 35 are formed in individual operative TAV openings 33 and in individual trench openings 40 at the same time and yet are removed at different time-spaced periods of time. Such depicted processing is also but one example embodiment of removing sacrificial plugs 35 from individual operative TAV openings 33 before removing sacrificial plugs 35 that are in individual trench openings 40, with the forming of conductive material 44 in individual operative TAV openings 33 occurring before removing sacrificial plugs 35 that are in trench openings 40. Alternately, this could be reversed (not shown). Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
In one embodiment, the mask (e.g., 23) comprising the horizontally-elongated trench openings (e.g., 28) and operative TAV openings (e.g., 31) may be formed to comprise dummy TAV openings. In the context of this document, a “dummy TAV opening” is an opening in which a dummy TAV is or will be formed in the stack, with a “dummy TAV” being a TAV in which no current ever flows there-through in a finished circuit construction and which may be a circuit-inoperable dead end that is not part of a current flow path of a circuit even if extending to or from an
electronic component. As an example, one or more of the depicted TAV openings 31 in Figs. 7 and 8 could be a dummy TAV opening. Alternately, dummy TAV openings could be formed elsewhere among operative TAV openings and/or laterally outside of any wordline. Regardless, in such an embodiment, such etching of unmasked portions of stack 18 will then also be conducted through the dummy TAV openings, thereby forming dummy TAV openings in stack 18. Dummy material is sometime thereafter formed in individual of the dummy TAV openings in the stack. In this document, “dummy material” is a material in which no current ever flows there-through in the finished circuitry construction regardless of whether the dummy material is conductive, semiconductive, and/or insulative. In one embodiment, such dummy material may comprise conductive material 44 which is formed in the individual dummy TAV openings at the same time conductive material 44 is formed in the operative TAV openings in the stack. Alternately and as an example, the dummy material may not comprise such conductive material 44, with the forming of conductive material 44 in operative TAV openings 33 and the forming of dummy material in the individual dummy TAV openings in the stack occurring at different time-spaced periods of time. Either may be formed before the other, with in one embodiment conductive material 44 being formed in the individual operative TAV openings in stack 18 before forming the dummy material in the individual dummy TAV openings in the stack.
In one embodiment, memory array 12 is formed to comprise CMOS-under-array circuitry.
Some embodiments of the invention comprise forming a step atop or above an uppermost of the insulative tiers of the alternating insulative tiers and wordline tiers on at least one side of individual of the wordlines, with the wordline-intervening structure being atop such step. See for example Figs. 31-33 processed in accordance with the above-described example embodiments. Such show formation of a step 59 (so designated only in Fig. 33 due to space constraint in Figs. 31 and 32) atop uppermost insulative tier 20 in stack 18, with step 59 comprising insulative material 24 of such uppermost insulative tier 20. Step 59 may be elevationally recessed into such insulative material (not shown) or may comprise an uppermost surface
of such insulative material as shown. Regardless, wordline-intervening structure 55 is atop step 59.
Figs. 33A, 33B, and 33C show structures that may result from processing alternate constructions 10a, 10b, and 10c, respectively, as shown in Figs. 20A, 20B, and 20C, respectively, and having one or more respective steps 59. Accordingly, in some embodiments, a step 59 is above uppermost insulative tier 20 and in some such embodiments is above uppermost conductor tier 15. In some such embodiments, the step is within insulating material of uppermost insulating tier 13. In one embodiment, the step is on only one side of individual wordlines 29 and in another embodiment is on both sides of individual wordlines 29. In some embodiments, the step is horizontal. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
In some embodiments, the step is atop the uppermost conductor tier (not shown) and comprises conducting material of the conductor tier (not shown), for example as may occur in gate-first processing wherein sacrificial material 26 is not first-deposited. In such example embodiment, the step may comprise an uppermost surface of the conducting material of the conductor tier or may be recessed elevationally there-into. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
Some embodiments of the invention comprise forming the wordline-intervening structure to comprise opposing laterally-outer longitudinal edges, at least some of each of which above the uppermost conductor tier are less overall steep than the opposing laterally-outer longitudinal edges below the uppermost conductor tier. A first example such embodiment is described with reference to Figs. 34 and 35 with respect to a construction 10d. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix “d” or with different numerals.
Referring to Fig. 34, such shows a structure in processing sequence the same as that shown by Fig. 20 in the first-described embodiments. Tapered/sloped sidewalls of materials 51 and 24 of uppermost insulating tier 13 have been formed and which are less overall steep than below uppermost conductor tier 15. Such may result from using wider mask openings 47 (in
Fig. 18), misaligned to the right, and as an artifact of etching to form opening 40. Alternately, such can result from changing etching power and/or etching chemistry to introduce a degree of isotropy into the act of etching, regardless of whether using wider mask openings than that shown in Fig. 18.
Fig. 35 shows example subsequent processing having occurred through and in accordance with that shown by Fig. 33 in the first-described embodiments, yet whereby a wordline-intervening structure 55d has been formed. Such comprises opposing laterally-outer longitudinal edges 70. At least some of each of opposing laterally-outer longitudinal edges 70 above uppermost conductor tier 15 are less overall steep than the opposing laterally-outer longitudinal edges 70 below uppermost conductor tier 15. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
In one embodiment and as shown, the at least some (e.g., all as shown) of each of opposing laterally-outer longitudinal edges 70 above uppermost conductor tier 15 has constant slope (rise over run) above uppermost conductor tier 15. Alternately, for example, at least some of each of opposing laterally-outer longitudinal edges 70 above uppermost conductor tier 15 may not have constant slope, for example as shown with respect to an alternate embodiment construction 10h in Fig. 42. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix “h”. Such shows an example wherein each of opposing laterally-outer longitudinal edges 70 above uppermost conductor tier 15 are convexly curved relative to opening 40. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
Fig. 35 also shows an example embodiment wherein each of opposing laterally-outer longitudinal edges 70 on each side has a respective lowest location 75 where steepness changes to a different and constant steepness below lowest location 75, with lowest location 75 on each side being at different elevations relative one another (e.g., left-side location 75 is higher than right-side location 75). This may result from mask misalignment left or right of mask openings 47 in masking material 46 as exemplified in Fig. 18.
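For readers unfamiliar with the "overall steepness" comparison used above, the following is a minimal, hypothetical Python sketch (the function name and all coordinates are illustrative inventions, not taken from the figures) treating overall steepness of a piecewise edge segment as its total rise over its total run, so an edge above the uppermost conductor tier can be compared numerically against the same edge below it.

    def overall_steepness(points):
        # Overall steepness of a piecewise edge: total vertical rise over
        # total horizontal run between its first and last points.
        # points: list of (horizontal, elevation) pairs along the edge.
        (x0, z0), (x1, z1) = points[0], points[-1]
        run = abs(x1 - x0)
        rise = abs(z1 - z0)
        return float("inf") if run == 0 else rise / run

    # Hypothetical edge-70 coordinates (nm): gently sloped above the
    # uppermost conductor tier, near-vertical below it.
    edge_above_tier_15 = [(0.0, 100.0), (12.0, 130.0)]   # rise 30 over run 12 -> 2.5
    edge_below_tier_15 = [(12.0, 0.0), (14.0, 100.0)]    # rise 100 over run 2 -> 50.0

    assert overall_steepness(edge_above_tier_15) < overall_steepness(edge_below_tier_15)

On this measure, a "less overall steep" edge above the tier simply has a smaller rise-over-run than the portion of the edge below the tier, independent of whether the slope along either portion is constant.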
Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
Figs. 36 and 37 show an alternate example construction 10e. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix “e”. Figs. 36 and 37 show perfect left-right mask alignment whereby, for example, lowest locations 75e on each side of wordline-intervening structure 55e are at the same elevation relative one another. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
Figs. 38, 39 and Figs. 40, 41 show analogous alternate embodiment constructions 10f and 10g, respectively. Like numerals from the above-described embodiments have been used where appropriate, with some construction differences being indicated with the suffix “f” and “g”, respectively. Figs. 38 and 39 show an example embodiment wherein slight mask misalignment to the right has occurred, and with lowest locations 75f of structure 55f being within uppermost insulating tier 13 and at different elevations relative one another. Figs. 40 and 41 show an alternate example embodiment wherein perfect mask alignment has occurred, with lowest locations 75g of structure 55g being within uppermost insulating tier 13 and at the same elevation relative one another. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
Embodiments of the invention encompass memory arrays independent of method of manufacture. Nevertheless, such memory arrays may have any of the attributes as described herein in method embodiments. Likewise, the above-described method embodiments may incorporate and form any of the attributes described with respect to device embodiments. Memory array embodiments may result from artifact(s) of manufacture and, regardless, may or may not have a change (e.g., improvement) in operation compared to predecessor construction(s) that is/are not in accordance with the invention(s).
An embodiment of the invention comprises a memory array (e.g., 12) comprising a vertical stack (e.g., 18) comprising an uppermost insulating tier (e.g., 13), an uppermost conductor tier (e.g., 15) below the insulating
tier, and alternating insulative tiers (e.g., 20) and wordline tiers (e.g., 22) below the uppermost conductor tier. The wordline tiers comprise gate regions (e.g., 52) of individual memory cells (e.g., 56). The gate regions individually comprise part of a wordline (e.g., 29) in individual of the wordline tiers. Channel-material strings (e.g., 53) extend elevationally through the insulative tiers and the wordline tiers. The individual memory cells comprise a memory structure (e.g., 65) laterally between individual of the gate regions and channel material (e.g., 36) of the channel-material strings. A wordline-intervening structure (e.g., 55, 55a, 55b, 55c) extends through the stack between immediately-adjacent wordlines. A step (e.g., 59) is atop or above an uppermost of the insulative tiers on at least one side of the individual wordlines. The wordline-intervening structure is atop the step. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
In some embodiments, a memory array (e.g., 12) comprises a vertical stack (e.g., 18) comprising an uppermost insulating tier (e.g., 13), an uppermost conductor tier (e.g., 15) below the insulating tier, and alternating insulative tiers (e.g., 20) and wordline tiers (e.g., 22) below the uppermost conductor tier. The wordline tiers comprise gate regions (e.g., 52) of individual memory cells (e.g., 56). The gate regions individually comprise part of a wordline (e.g., 29) in individual of the wordline tiers. Channel-material strings (e.g., 53) extend elevationally through the insulative tiers and the wordline tiers. The individual memory cells comprise a memory structure (e.g., 65) laterally between individual of the gate regions and channel material (e.g., 36) of the channel-material strings. A wordline-intervening structure (e.g., 55d, 55e, 55f, 55g, 55h) extends through the stack between immediately-adjacent wordlines. The wordline-intervening structure comprises opposing laterally-outer longitudinal edges (e.g., 70). At least some of each of the opposing laterally-outer longitudinal edges above the uppermost conductor tier is less overall steep than the opposing laterally-outer longitudinal edges below the uppermost conductor tier. Any other attribute(s) or aspect(s) as shown and/or described herein with respect to other embodiments may be used.
The above processing(s) or construction(s) may be considered as being relative to an array of components formed as or within a single stack
or single deck of such components above or as part of an underlying base substrate (albeit, the single stack/deck may have multiple tiers). Control and/or other peripheral circuitry for operating or accessing such components within an array may also be formed anywhere as part of the finished construction, and in some embodiments may be under the array (e.g., CMOS-under-array). Regardless, one or more additional such stack(s)/deck(s) may be provided or fabricated above and/or below that shown in the figures or described above. Further, the array(s) of components may be the same or different relative one another in different stacks/decks. Intervening structure may be provided between immediately-vertically-adjacent stacks/decks (e.g., additional circuitry and/or dielectric layers). Also, different stacks/decks may be electrically coupled relative one another. The multiple stacks/decks may be fabricated separately and sequentially (e.g., one atop another), or two or more stacks/decks may be fabricated at essentially the same time.
The assemblies and structures discussed above may be used in integrated circuits/circuitry and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.
In this document, unless otherwise indicated, “elevational”, “higher”, “upper”, “lower”, “top”, “atop”, “bottom”, “above”, “below”, “under”, “beneath”, “up”, and “down” are generally with reference to the vertical direction. “Horizontal” refers to a general direction (i.e., within 10 degrees) along a primary substrate surface and may be relative to which the substrate is processed during fabrication, and vertical is a direction generally orthogonal thereto. Reference to “exactly horizontal” is the direction along the primary substrate surface (i.e., no degrees there-from) and may be relative to which the substrate is processed during fabrication. Further, “vertical” and “horizontal” as used herein are generally perpendicular directions relative one another and independent of orientation of the
substrate in three-dimensional space. Additionally, “elevationally-extending” and “extend(ing) elevationally” refer to a direction that is angled away by at least 45° from exactly horizontal. Further, “extend(ing) elevationally”, “elevationally-extending”, “extend(ing) horizontally”, “horizontally-extending”, and the like with respect to a field effect transistor are with reference to orientation of the transistor’s channel length along which current flows in operation between the source/drain regions. For bipolar junction transistors, “extend(ing) elevationally”, “elevationally-extending”, “extend(ing) horizontally”, “horizontally-extending”, and the like are with reference to orientation of the base length along which current flows in operation between the emitter and collector. In some embodiments, any component, feature, and/or region that extends elevationally extends vertically or within 10° of vertical.
Further, “directly above”, “directly below”, and “directly under” require at least some lateral overlap (i.e., horizontally) of two stated regions/materials/components relative one another. Also, use of “above” not preceded by “directly” only requires that some portion of the stated region/material/component that is above the other be elevationally outward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components). Analogously, use of “below” and “under” not preceded by “directly” only requires that some portion of the stated region/material/component that is below/under the other be elevationally inward of the other (i.e., independent of whether there is any lateral overlap of the two stated regions/materials/components).
Any of the materials, regions, and structures described herein may be homogenous or non-homogenous, and regardless may be continuous or discontinuous over any material which such overlie. Where one or more example composition(s) is/are provided for any material, that material may comprise, consist essentially of, or consist of such one or more composition(s). Further, unless otherwise stated, each material may be formed using any suitable existing or future-developed technique, with atomic layer deposition, chemical vapor deposition, physical vapor deposition, epitaxial growth, diffusion doping, and ion implanting being examples.
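The angular definitions above (“horizontal” within 10 degrees of the primary substrate surface, “elevationally-extending” angled at least 45° away from exactly horizontal, and, in some embodiments, elevational features within 10° of vertical) lend themselves to a simple numeric check. The following is a minimal, hypothetical Python sketch of those thresholds; the function name and example angles are illustrative only and do not appear in this description.

    def classify_direction(angle_from_exactly_horizontal_deg):
        # Classify a direction by its angle (degrees) measured from
        # exactly horizontal, per the definitions in this document.
        a = abs(angle_from_exactly_horizontal_deg)
        labels = []
        if a <= 10:
            labels.append("horizontal")  # within 10 degrees of the primary surface
        if a >= 45:
            labels.append("elevationally-extending")  # >= 45 degrees from exactly horizontal
        if a >= 80:
            labels.append("vertical (within 10 degrees)")
        return labels

    print(classify_direction(5))    # ['horizontal']
    print(classify_direction(60))   # ['elevationally-extending']
    print(classify_direction(88))   # ['elevationally-extending', 'vertical (within 10 degrees)']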
Additionally, “thickness” by itself (no preceding directional adjective) is defined as the mean straight-line distance through a given material or region perpendicularly from a closest surface of an immediately-adjacent material of different composition or of an immediately-adjacent region. Additionally, the various materials or regions described herein may be of substantially constant thickness or of variable thicknesses. If of variable thickness, thickness refers to average thickness unless otherwise indicated, and such material or region will have some minimum thickness and some maximum thickness due to the thickness being variable. As used herein, “different composition” only requires those portions of two stated materials or regions that may be directly against one another to be chemically and/or physically different, for example if such materials or regions are not homogenous. If the two stated materials or regions are not directly against one another, “different composition” only requires that those portions of the two stated materials or regions that are closest to one another be chemically and/or physically different if such materials or regions are not homogenous. In this document, a material, region, or structure is “directly against” another when there is at least some physical touching contact of the stated materials, regions, or structures relative one another. In contrast, “over”, “on”, “adjacent”, “along”, and “against” not preceded by “directly” encompass “directly against” as well as construction where intervening material(s), region(s), or structure(s) result(s) in no physical touching contact of the stated materials, regions, or structures relative one another.
Herein, regions-materials-components are “electrically coupled” relative one another if in normal operation electric current is capable of continuously flowing from one to the other and does so predominately by movement of subatomic positive and/or negative charges when such are sufficiently generated. Another electronic component may be between and electrically coupled to the regions-materials-components. In contrast, when regions-materials-components are referred to as being “directly electrically coupled”, no intervening electronic component (e.g., no diode, transistor, resistor, transducer, switch, fuse, etc.) is between the directly electrically coupled regions-materials-components.
The composition of any of the conductive/conductor/conducting materials herein may be metal material and/or conductively-doped
semiconductive/semiconductor/semiconducting material. “Metal material” is any one or combination of an elemental metal, any mixture or alloy of two or more elemental metals, and any one or more conductive metal compound(s).
Herein, “selective” as to etch, etching, removing, removal, depositing, forming, and/or formation is such an act of one stated material relative to another stated material(s) so acted upon at a rate of at least 2:1 by volume. Further, selectively depositing, selectively growing, or selectively forming is depositing, growing, or forming one material relative to another stated material or materials at a rate of at least 2:1 by volume for at least the first 75 Angstroms of depositing, growing, or forming.
Unless otherwise indicated, use of “or” herein encompasses either and both.

CONCLUSION

In some embodiments, a method used in forming a memory array and conductive through-array-vias (TAVs) comprises forming a stack comprising vertically-alternating insulative tiers and wordline tiers. A mask is formed comprising horizontally-elongated trench openings and operative TAV openings above the stack. Etching is conducted of unmasked portions of the stack through the trench and operative TAV openings in the mask to form horizontally-elongated trench openings in the stack and to form operative TAV openings in the stack. Conductive material is formed in the operative TAV openings in the stack to form individual operative TAVs in individual of the operative TAV openings in the stack. A wordline-intervening structure is formed in individual of the trench openings in the stack.
In some embodiments, a method used in forming a memory array and conductive through-array-vias (TAVs) comprises forming a stack comprising an uppermost conductor tier and vertically-alternating insulative tiers and wordline tiers. The uppermost conductor tier and wordline tiers comprise a first material and the insulative tiers comprise a second material of different composition from that of the first material. Channel-material strings are formed through the insulative tiers and the wordline tiers. A mask is formed comprising horizontally-elongated trench openings and operative TAV openings above the stack. Etching is conducted of unmasked portions of the
stack through the trench and operative TAV openings in the mask to form horizontally-elongated trench openings in the stack and to form operative TAV openings in the stack. Conductive material is formed in the operative TAV openings in the stack to form individual operative TAVs in individual of the operative TAV openings in the stack. The first material is removed after forming the conductive material in the operative TAV openings in the stack to form wordline-tier voids and an uppermost-conductor-tier void. Conducting material is formed in the wordline-tier voids to comprise the individual wordlines and in the uppermost-conductor-tier void. After forming the conducting material, a wordline-intervening structure is formed in individual of the trench openings in the stack.
In some embodiments, a memory array comprises a vertical stack comprising an uppermost insulating tier, an uppermost conductor tier below the insulating tier, and alternating insulative tiers and wordline tiers below the uppermost conductor tier. The wordline tiers comprise gate regions of individual memory cells and the gate regions individually comprise part of a wordline in individual of the wordline tiers. Channel-material strings extend elevationally through the insulative tiers and the wordline tiers. The individual memory cells comprise a memory structure laterally between individual of the gate regions and channel material of the channel-material strings. A wordline-intervening structure extends through the stack between immediately-adjacent of the wordlines. A step is atop or above an uppermost of the insulative tiers of the alternating insulative tiers and wordline tiers on at least one side of individual of the wordlines. The wordline-intervening structure is atop the step.
In some embodiments, a memory array comprises a vertical stack comprising an uppermost insulating tier, an uppermost conductor tier below the insulating tier, and alternating insulative tiers and wordline tiers below the uppermost conductor tier. The wordline tiers comprise gate regions of individual memory cells and the gate regions individually comprise part of a wordline in individual of the wordline tiers. Channel-material strings extend elevationally through the insulative tiers and the wordline tiers. The individual memory cells comprise a memory structure laterally between individual of the gate regions and channel material of the channel-material strings. A wordline-intervening structure extends through the stack between
immediately-adjacent of the wordlines. The wordline-intervening structure comprises opposing laterally-outer longitudinal edges, and at least some of each of the opposing laterally-outer longitudinal edges above the uppermost conductor tier is less overall steep than the opposing laterally-outer longitudinal edges below the uppermost conductor tier.
Processes for forming interconnection layers having tight pitch interconnect structures within a dielectric layer, wherein trenches and vias used to form interconnect structures have relatively low aspect ratios prior to metallization. The low aspect ratios may reduce or substantially eliminate the potential of voids forming within the metallization material when it is deposited. Embodiments herein may achieve such relatively low aspect ratios through processes that allow for the removal of structures, which are utilized to form the trenches and the vias, prior to metallization.
CLAIMS
What is claimed is:
1. A method of forming a microelectronic structure, comprising:
forming a dielectric layer on a substrate;
forming a hardmask layer on the dielectric layer;
forming a plurality of backbone structures on the hardmask layer;
forming side spacers adjacent sides of each of the plurality of backbone structures;
etching a portion of the hardmask layer and a portion of the dielectric layer between adjacent side spacers between at least two adjacent backbone structures to form at least one first trench;
depositing a sacrificial material in the at least one first trench;
removing at least one backbone structure and etching a portion of the hardmask layer and the dielectric layer which resided below the at least one backbone structure to form at least one second trench;
depositing a fill material in the at least one second trench;
removing the side spacers;
removing the sacrificial material from the at least one first trench;
removing the fill material from the at least one second trench; and
depositing a conductive material in the at least one first trench and the at least one second trench.
2. The method of claim 1, wherein forming the plurality of backbone structures comprises:
depositing a backbone material on the hardmask layer;
patterning spacers adjacent the backbone material; and
etching the backbone material to transfer the pattern of the spacers into the backbone material.
3. The method of claim 2, wherein patterning spacers adjacent the backbone material comprises:
patterning sacrificial hardmask structures adjacent the backbone material;
depositing a conformal spacer material layer over the plurality of backbone structures;
anisotropically etching the conformal spacer material layer; and
removing the sacrificial hardmask structures.
4. The method of claim 1, wherein forming side spacers adjacent sides of each of the plurality of backbone structures comprises:
depositing a conformal side spacer material layer over the plurality of backbone structures; and
anisotropically etching the conformal side spacer material layer.
5. The method of claim 1, wherein removing the side spacers comprises polishing away the side spacers.
6. The method of claim 1, wherein depositing the sacrificial material in the at least one first trench comprises depositing a material selected from the group consisting of titanium nitride, titanium oxide, ruthenium, and cobalt.
7. The method of claim 1, wherein depositing the fill material in the at least one second trench comprises depositing a carbon hardmask in the at least one second trench.
8. The method of claim 1, wherein forming the dielectric layer on the substrate comprises forming a low k dielectric layer.
9. The method of claim 1, wherein forming the plurality of backbone structures on the hardmask layer comprises forming the plurality of backbone structures from a material selected from the group consisting of polysilicon, amorphous silicon, amorphous carbon, silicon carbide, silicon nitride, and germanium.
10. The method of claim 1, wherein depositing the conductive material in the at least one first trench and the at least one second trench comprises depositing a metal.
11.
A method of forming a microelectronic structure, comprising:
forming a dielectric layer on a substrate, wherein the substrate includes a first contact structure and a second contact structure;
forming a hardmask layer on the dielectric layer;
forming a plurality of backbone structures on the hardmask layer;
forming side spacers adjacent sides of each of the plurality of backbone structures;
etching a portion of the hardmask layer and a portion of the dielectric layer between adjacent side spacers between at least two adjacent backbone structures to form at least one first trench;
forming a first via extending from the at least one first trench to the substrate first contact structure;
depositing a sacrificial material in the at least one first trench;
removing at least one backbone structure and etching a portion of the hardmask layer and the dielectric layer which resided below the at least one backbone structure to form at least one second trench;
forming a second via extending from the at least one second trench to the substrate second contact structure;
depositing a fill material in the at least one second trench;
removing the side spacers;
removing the sacrificial material from the at least one first trench;
removing the fill material from the at least one second trench; and
depositing a conductive material in the at least one first trench, the first via, the at least one second trench, and the second via.
12. The method of claim 11, wherein forming the plurality of backbone structures comprises:
depositing a backbone material on the hardmask layer;
patterning spacers adjacent the backbone material; and
etching the backbone material to transfer the pattern of the spacers into the backbone material.
13. The method of claim 12, wherein patterning spacers adjacent the backbone material comprises:
patterning sacrificial hardmask structures adjacent the backbone material;
depositing a conformal spacer material layer over the plurality of backbone structures;
anisotropically etching the conformal spacer material layer; and
removing the sacrificial hardmask structures.
14. The method of claim 11, wherein forming side spacers adjacent sides of each of the plurality of backbone structures comprises:
depositing a conformal side spacer material layer over the plurality of backbone structures; and
anisotropically etching the conformal side spacer material layer.
15. The method of claim 11, wherein removing the side spacers comprises polishing away the side spacers.
16. The method of claim 11, wherein depositing the sacrificial material in the at least one first trench comprises depositing a material selected from the group consisting of titanium nitride, titanium oxide, ruthenium, and cobalt.
17. The method of claim 11, wherein depositing the fill material in the at least one second trench comprises depositing a carbon hardmask in the at least one second trench.
18. The method of claim 11, wherein forming the dielectric layer on the substrate comprises forming a low k dielectric layer.
19. The method of claim 11, wherein forming the plurality of backbone structures on the hardmask layer comprises forming the plurality of backbone structures from a material selected from the group consisting of polysilicon, amorphous silicon, amorphous carbon, silicon carbide, silicon nitride, and germanium.
20. The method of claim 11, wherein depositing the conductive material in the at least one first trench and the at least one second trench comprises depositing a metal.
21.
A method of forming a microelectronic structure, comprising:
forming a dielectric layer on a substrate, wherein the substrate includes a first contact structure and a second contact structure;
forming a hardmask layer on the dielectric layer;
forming a plurality of backbone structures on the hardmask layer;
forming side spacers adjacent sides of each of the plurality of backbone structures;
etching a portion of the hardmask layer and a portion of the dielectric layer between adjacent side spacers between at least two adjacent backbone structures to form at least one first trench;
forming a first via extending from the at least one first trench to the substrate first contact structure;
depositing a via hardmask material into the first via;
depositing a sacrificial material in the at least one first trench;
removing at least one backbone structure and etching a portion of the hardmask layer and the dielectric layer which resided below the at least one backbone structure to form at least one second trench;
forming a second via extending from the at least one second trench to the substrate second contact structure;
depositing a fill material in the at least one second trench;
removing the side spacers;
removing the sacrificial material from the at least one first trench;
removing the via hardmask material from the first via;
removing the fill material from the at least one second trench; and
depositing a conductive material in the at least one first trench, the first via, the at least one second trench, and the second via.
22. The method of claim 21, wherein removing the via hardmask material from the first via and removing the fill material from the at least one second trench comprises simultaneously removing the via hardmask material from the first via and removing the fill material from the at least one second trench.
METHODS FOR FORMING INTERCONNECT LAYERS HAVING TIGHT PITCH INTERCONNECT STRUCTURES

TECHNICAL FIELD

Embodiments of the present description generally relate to the field of microelectronic device fabrication, and, more particularly, to forming interconnection layers having tight pitch interconnect structures within a dielectric layer. Trenches and vias, which are used to form the interconnect structures, are fabricated to have relatively low aspect ratios prior to metallization, wherein the low aspect ratios reduce or substantially eliminate the potential of voids forming within the metallization material when it is deposited.

BACKGROUND

The microelectronic industry is continually striving to produce ever faster and smaller microelectronic devices for use in various mobile electronic products, such as portable computers, electronic tablets, cellular phones, digital cameras, and the like. As these goals are achieved, the fabrication of the microelectronic devices becomes more challenging. One such challenging area relates to the interconnect layers that are used to connect the individual devices on a microelectronic chip and/or to send and/or receive signals external to the individual device(s). Interconnect layers generally comprise a dielectric material having conductive interconnects (lines), such as copper and copper alloy, coupled to the individual devices. The interconnects (lines) generally comprise a metal line portion and a metal via portion, wherein the metal line portion is formed in a trench within the dielectric material and the metal via portion is formed within a via opening that extends from the trench through the dielectric material. It is understood that a plurality of interconnection layers (e.g., five or six levels) may be formed to effectuate the desired electrical connections.
As these interconnects are manufactured at smaller pitches (e.g., narrower and/or closer together), it becomes more and more difficult to properly align the trenches and the vias within and between the desired interconnect layers. In particular, during manufacturing, the location of the via edges with respect to the interconnect layer or line it is to contact will have variation (e.g., be misaligned) due to natural manufacturing variation. A via, however, must allow for connection of one interconnect layer to the desired underlying interconnect layer or line without erroneously connecting to a different interconnect layer or line. If the via is misaligned and contacts the wrong metal feature (e.g., misses the line below and/or connects two lines), the microelectronic chip may short circuit, resulting in degraded electrical performance. One solution to address this issue is to reduce the trench and the via size (e.g., making the via narrower). However, reducing the trench and the via size means that the aspect ratio of the openings of the trench and the via may be high. As will be understood to those skilled in the art, a high aspect ratio may result in reduced yield due to voiding during the deposition of conductive material (metallization) used to form the interconnects.

BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter of the present disclosure is particularly pointed out and distinctly claimed in the concluding portion of the specification. The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings.
It is understood that the accompanying drawings depict only several embodiments in accordance with the present disclosure and are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings, such that the advantages of the present disclosure can be more readily ascertained, in which:
FIGs. 1-28 illustrate cross-sectional views of a method of forming an interconnection layer, according to an embodiment of the present description.
FIG. 29 is a flow chart of a process of fabricating an interconnection layer, according to an embodiment of the present description.
FIG. 30 illustrates a computing device in accordance with one implementation of the present description.

DESCRIPTION OF EMBODIMENTS

In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the claimed subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the subject matter. It is to be understood that the various embodiments, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein, in connection with one embodiment, may be implemented within other embodiments without departing from the spirit and scope of the claimed subject matter. References within this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present description. Therefore, the use of the phrase "one embodiment" or "in an embodiment" does not necessarily refer to the same embodiment. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the claimed subject matter. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the subject matter is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the appended claims are entitled. In the drawings, like numerals refer to the same or similar elements or functionality throughout the several views, and the elements depicted therein are not necessarily to scale with one another; rather, individual elements may be enlarged or reduced in order to more easily comprehend the elements in the context of the present description.
The terms "over", "to", "between", and "on" as used herein may refer to a relative position of one layer with respect to other layers. One layer "over" or "on" another layer or bonded "to" another layer may be directly in contact with the other layer or may have one or more intervening layers. One layer "between" layers may be directly in contact with the layers or may have one or more intervening layers.
Embodiments of the present description include forming interconnection layers having tight pitch interconnect structures within a dielectric layer, wherein trenches and vias used to form the interconnect structures have relatively low aspect ratios prior to metallization. The low aspect ratios may reduce or substantially eliminate the potential of voids forming within the metallization material when it is deposited.
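As a rough illustration of the aspect-ratio arithmetic at work here, the following is a minimal, hypothetical Python sketch using the representative values given later in this description (trench aspect ratios greater than about 8:1 and via aspect ratios greater than about 10:1 before polishing, dropping below about 4:1 after the structures above the hardmask are removed); the specific widths and heights below are illustrative inventions, not dimensions from the figures.

    def aspect_ratio(height_nm, width_nm):
        # Aspect ratio of an opening: height to width.
        return height_nm / width_nm

    width = 15.0             # illustrative trench/via width, nm
    trench_h_before = 125.0  # illustrative trench depth incl. structures above the hardmask
    via_h_before = 155.0     # illustrative via depth before polishing
    removed = 100.0          # illustrative height removed by polishing

    print(f"trench AR before polish: {aspect_ratio(trench_h_before, width):.1f}:1")            # ~8.3:1
    print(f"via AR before polish:    {aspect_ratio(via_h_before, width):.1f}:1")               # ~10.3:1
    print(f"trench AR after polish:  {aspect_ratio(trench_h_before - removed, width):.1f}:1")  # ~1.7:1
    print(f"via AR after polish:     {aspect_ratio(via_h_before - removed, width):.1f}:1")     # ~3.7:1

Removing the spacers and other structures above the hardmask reduces the opening height while the width is unchanged, which is the mechanism by which the aspect ratio falls before the conductive material is deposited.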
Embodiments of the present description may achieve such relatively low aspect ratios through processes that allow for the removal of structures, which are utilized to form the trenches and the vias, prior to metallization.
FIG. 1 illustrates a stacked layer for backbone patterning. The stacked layer 100 may comprise a dielectric layer 104 formed on a substrate 102, a first hardmask layer 106 formed on the dielectric layer 104, a backbone material 108 formed on the first hardmask layer 106, a second hardmask layer 112 formed on the backbone material 108, a sacrificial hardmask layer 114 formed on the second hardmask layer 112, a first antireflective coating 116 formed on the sacrificial hardmask layer 114, and a first photoresist material 118 patterned on the first antireflective coating 116. The components of the stacked layer 100 may be deposited by any known techniques, which, for the purpose of clarity and conciseness, will not be discussed herein.
The substrate 102 may be a microelectronic chip, a wafer substrate (e.g., a portion of a silicon wafer), or the like, having circuit devices (not shown), including transistors or the like, wherein contact structures (illustrated as first contact structure 120A and second contact structure 120B) may be in electrical communication with the circuit devices. Furthermore, the substrate 102 may be an interconnection layer, wherein the contact structures 120A, 120B may be interconnects, as will be discussed.
In one embodiment, the dielectric layer 104 may be a material having, for example, a dielectric constant (k) less than the dielectric constant of silicon dioxide (SiO2) (e.g., a "low k" dielectric material). Representative low k dielectric materials include materials containing silicon, carbon, and/or oxygen, which may be referred to as polymers and that are known in the art. In one embodiment, the dielectric layer 104 may be porous.
In one embodiment, the first hardmask layer 106, the second hardmask layer 112, and the sacrificial hardmask layer 114 may be dielectric materials. Representative dielectric materials may include, but are not limited to, various oxides, nitrides, and carbides, for example, silicon oxide, titanium oxide, hafnium oxide, aluminum oxide, oxynitride, zirconium oxide, hafnium silicate, lanthanum oxide, silicon nitride, boron nitride, amorphous carbon, silicon carbide, aluminum nitride, and other similar dielectric materials. In one embodiment, the first hardmask layer 106 is deposited, for example, by a plasma deposition process, to a thickness to serve as a mask to the underlying dielectric layer 104 (e.g., to protect from undesired modification of the dielectric material from energy used in subsequent process steps). In one embodiment, a representative thickness is a thickness that will not significantly affect an overall dielectric constant of the combined dielectric layer 104 and first hardmask layer 106, but at most will marginally affect such overall dielectric constant. In one embodiment, a representative thickness is on the order of 30 angstroms (Å) ± 20 Å. In another embodiment, a representative thickness is on the order of two to five nanometers (nm).
The backbone material 108 may include, but is not limited to, polysilicon, amorphous silicon, amorphous carbon, silicon nitride, silicon carbide, and germanium.
As shown in FIG. 2, the stacked layer 100 of FIG. 1 may be etched, wherein the second hardmask layer 112 acts as an etch stop.
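For orientation, the following is a minimal, hypothetical Python sketch recording the FIG. 1 film stack, bottom to top, by the reference numerals just enumerated; the tuple layout is an illustrative invention, and the example materials are drawn from those named as representative candidates in this description (the antireflective coating composition is not specified and is labeled as such).

    # (reference numeral, layer, example material), ordered bottom to top per FIG. 1
    STACKED_LAYER_100 = [
        (102, "substrate", "silicon wafer portion with contact structures 120A/120B"),
        (104, "dielectric layer", "low k dielectric"),
        (106, "first hardmask layer", "silicon oxide"),
        (108, "backbone material", "polysilicon"),
        (112, "second hardmask layer", "silicon nitride"),
        (114, "sacrificial hardmask layer", "amorphous carbon"),
        (116, "first antireflective coating", "composition not specified herein"),
        (118, "first photoresist material", "patterned photoresist"),
    ]

    for numeral, layer, material in reversed(STACKED_LAYER_100):
        print(f"{numeral}: {layer} ({material})")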
The etching results in the first photoresist material 118 pattern being transferred into the sacrificial hardmask layer 114. As shown in FIG. 2, the first photoresist material 118 and the first antireflective coating 116 may be removed, resulting in patterned sacrificial hardmask structures 122.
As shown in FIG. 3, a conformal spacer material layer 124 may be deposited over the structure shown in FIG. 2. The conformal spacer material layer 124 may be deposited by any conformal deposition techniques known in the art, and may comprise any appropriate material, including, but not limited to, silicon dioxide, silicon nitride, silicon carbide, and amorphous silicon. As shown in FIG. 4, the conformal spacer material layer 124 may be anisotropically etched and the sacrificial hardmask structures 122 may be removed to form first spacers 126.
As shown in FIG. 5, the structure of FIG. 4 may be etched, wherein the first hardmask layer 106 acts as an etch stop. The etching results in the pattern of the first spacers 126 being transferred into the backbone material 108, resulting in patterned backbone structures 128 capped with a portion of the second hardmask layer 112. In one embodiment, the second hardmask layer 112 may remain to protect the backbone material 108 during subsequent processing, such as the formation of trenches and vias, as will be discussed. In another embodiment, the second hardmask layer 112 may be removed.
As shown in FIG. 6, a conformal side spacer material layer 132 may be deposited over the structure shown in FIG. 5. The conformal side spacer material layer 132 may be deposited by any conformal deposition technique known in the art and may comprise any appropriate material, including, but not limited to, silicon dioxide, silicon nitride, titanium oxide, hafnium oxide, zirconium oxide, aluminum nitride, and amorphous silicon.
As shown in FIG. 7, a third hardmask 134 may be deposited over the conformal side spacer material layer 132, a second antireflective coating 136 may be deposited over the third hardmask 134, and a second photoresist material 138 may be patterned on the second antireflective coating 136. As shown in FIG. 8, the structure of FIG. 7 may be etched to remove a portion of the third hardmask 134 and a portion of the second antireflective coating 136 not protected by the patterned second photoresist material 138 (see FIG. 7), wherein the conformal side spacer material layer 132 acts as an etch stop. As shown in FIG. 9, the structure of FIG. 8 may be anisotropically etched through the conformal side spacer material layer 132 between adjacent patterned backbone structures 128, through a portion of the first hardmask layer 106, and into the dielectric layer 104, thereby forming at least one first trench 142 within the dielectric layer 104, wherein portions of the conformal side spacer material layer 132 may be protected from etching by the patterned third hardmask 134. It is understood that the trench 142 may extend perpendicularly from the plane of FIG. 9. The etching of the conformal side spacer material layer 132 may result in the formation of side spacers 144 along sides 146 of the patterned backbone structures 128.
As shown in FIG.
10, the third hardmask 134, the second antireflective coating 136, and the second photoresist material 138 may be removed, and a fourth hardmask 152 may be deposited, a third antireflective coating 154 may be deposited over the fourth hardmask 152, and a third photoresist material 156 may be patterned on the third antireflective coating 154 to have at least one opening 158 therein aligned with a respective first trench 142. As shown in FIGs. 11 and 12, a portion of the fourth hardmask 152 may be etched through the opening 158 and a further portion of the dielectric material 104 may be etched to form a first via 160 extending from the first trench 142 to a respective first contact structure 120A.
As shown in FIG. 13, the fourth hardmask 152, the third antireflective coating 154, and the third photoresist material 156 may be removed and a via hardmask 162 may be deposited. In one embodiment, the via hardmask 162 may be selected from materials that may be selectively removable in the presence of material used for the dielectric material 104 and the first hardmask layer 106 and any underlying metals, such as the material used for the contact structures 120A, 120B. In an embodiment, the via hardmask 162 may be a carbon hardmask, such as an amorphous carbon material, as will be understood to those skilled in the art. In another embodiment, the via hardmask 162 may be metal or metal nitrides, such as titanium nitride, cobalt, ruthenium, or a combination thereof, that are selectively removable relative to underlying metals.
As shown in FIG. 14, the via hardmask 162 may be etched back to remove a portion of the via hardmask 162 from the first trenches 142 while leaving a portion of the via hardmask 162 within the first via 160. It is understood that a portion of the via hardmask 162 may remain in the first trenches 142.
As shown in FIG. 15, a sacrificial material 164 may be deposited over the structure of FIG. 14, wherein the sacrificial material 164 is disposed within the first trenches 142. In one embodiment, the sacrificial material 164 may be selected from materials that can mechanically and chemically sustain further processing steps and that are selectively removable in the presence of material used for the dielectric layer 104 and any underlying metals, such as the material used for the contact structures 120A, 120B. In an embodiment, the sacrificial material 164 may include, but is not limited to, titanium oxide, titanium nitride, ruthenium, and cobalt.
As shown in FIG. 16, the structure of FIG. 15 may be polished, such as by chemical mechanical planarization, to remove a portion of the sacrificial material 164 and the second hardmask layer 112 (if present) and to expose the backbone structures 128.
As shown in FIG. 17, a fifth hardmask 166, such as a carbon hardmask, may be deposited over the structure of FIG. 16, a fourth antireflective coating 168 may be deposited over the fifth hardmask 166, and a fourth photoresist material 172 may be patterned on the fourth antireflective coating 168 to have at least one opening 174 therein. As shown in FIG. 18, the fifth hardmask 166 may be etched to expose a desired portion of the structure of FIG. 17. As shown in FIG. 19, the backbone structures 128 (see FIG. 18) may be etched away, wherein the etching continues through a portion of the first hardmask layer 106 exposed by the removal of the backbone structures 128 and into the dielectric layer 104, thereby forming at least one second trench 176 within the dielectric layer 104. In one embodiment, the structure of FIG.
In one embodiment, the structure of FIG. 19 may be exposed to a plasma of reactive gases (e.g., fluorocarbons, oxygen, chlorine, and/or boron trichloride) which are capable of etching the backbone structures 128, the first hardmask layer 106, and the dielectric layer 104 to a desired depth without etching the side spacers 144 and the sacrificial material 164.

As shown in FIG. 20, the remaining fifth hardmask 166 and fourth antireflective coating 168 may be removed, a sixth hardmask 178, such as a carbon hardmask, may be deposited over the structure of FIG. 19 filling the second trenches 176, a fifth antireflective coating 182 may be deposited over the sixth hardmask 178, and a fifth photoresist material 184 may be patterned on the fifth antireflective coating 182 to have at least one opening 186 therein aligned with a respective second trench 176 (see FIG. 19). As shown in FIG. 21, a portion of the sixth hardmask 178 may be etched through the opening 186 and a further portion of the dielectric material 104 may be etched to form a second via 188 extending from the second trench 176 to a respective second contact structure 120B.

As shown in FIG. 22, the sixth hardmask 178, the fifth antireflective coating 182, and the fifth photoresist material 184 (see FIG. 20) may be removed and replaced with a fill material 192 which extends into the second trenches 176 and the second vias 188 (see FIG. 21). In one embodiment, the fill material 192 may comprise a carbon hardmask, such as an amorphous carbon material. As shown in FIG. 23, the fill material 192 may be optionally etched back to expose the side spacers 144 while leaving a portion of the fill material 192 within the second trenches 176 and the second vias 188 (see FIG. 21).

As shown in FIG. 24, the structure of FIG. 23 may be polished, such as by chemical mechanical polishing, to expose the first hardmask layer 106. The sacrificial material 164 may then be selectively removed from the first trenches 142, as shown in FIG. 25. As shown in FIG. 26, the fill material 192 may be selectively removed from the second trenches 176 and the second vias 188, and the via hardmask 162 may be removed from the first vias 160. In one embodiment, where the fill material 192 and the via hardmask 162 are carbon hardmasks, as previously discussed, they may be removed with a single ashing and cleaning process, as known in the art.

As shown in FIG. 27, a conductive material 194 may be deposited over the structure of FIG. 26 to fill the first trenches 142, the first vias 160, the second trenches 176, and the second vias 188. The conductive material 194 may be made of any appropriate conductive material, such as metals including, but not limited to, copper, aluminum, tungsten, cobalt, ruthenium, and the like, with or without a liner material, such as tantalum, tantalum nitride, or titanium nitride. It is understood that the first hardmask layer 106 may be removed prior to the deposition of the conductive material 194.

As shown in FIG. 28, the structure of FIG. 27 may be polished to remove a portion of the conductive material 194 and the first hardmask layer 106 (if present), exposing the dielectric material 104 and thereby forming interconnects 196. The interconnects 196 may be, for example, wiring lines that are used to provide connections to and between devices connected to other interconnect layers or lines. The interconnects 196 may have similar sizes and dimensions, and may further be parallel to one another.
In addition, a pitch P (see FIG. 23) of the interconnects 196 may be relatively small, such that they are considered to have a tight pitch, such as an interconnect pitch P of less than about 80 nm.

Referring back to FIG. 23, prior to the polishing that removes the side spacers 144 and other structures above the first hardmask layer 106 as shown in FIG. 24, the aspect ratio (i.e., height to width) of the trenches (see first trenches 142 and second trenches 176 in FIG. 26) can be greater than about 8:1 (e.g., H1:W) and the aspect ratio of the vias (see first vias 160 and second vias 188 of FIG. 26) can be greater than about 10:1 (e.g., H2:W) for trenches having a pitch P of less than about 40 nm. As illustrated in FIG. 24, after polishing, the aspect ratio (i.e., height to width) of the trenches (e.g., H1':W) and the vias (e.g., H2':W) can be less than about 4:1. For illustration only, assuming a trench width W of about 20 nm at a 40 nm pitch, an 8:1 aspect ratio corresponds to a trench height H1 of more than about 160 nm before polishing, while a 4:1 aspect ratio corresponds to a height H1' of less than about 80 nm after polishing. As previously discussed, low aspect ratios can reduce or substantially eliminate the potential for voids forming within the conductive material 194 when it is deposited (see FIG. 27).

FIG. 29 is a flow chart of a process 200 of fabricating a microelectronic structure according to an embodiment of the present description. As set forth in block 202, a dielectric layer may be formed on a substrate. A hardmask layer may be formed on the dielectric layer, as set forth in block 204. As set forth in block 206, a plurality of backbone structures may be formed on the hardmask layer. Side spacers may be formed adjacent sides of each of the plurality of backbone structures, as set forth in block 208. As set forth in block 210, a portion of the hardmask layer and a portion of the dielectric layer may be etched between adjacent side spacers between at least two adjacent backbone structures to form at least one first trench in the dielectric layer. A sacrificial material may be deposited in the at least one first trench, as set forth in block 212. As set forth in block 214, at least one backbone structure may be removed, and a portion of the hardmask layer and of the dielectric layer which resided below the backbone structure may be etched to form at least one second trench. A fill material may be deposited in the at least one second trench, as set forth in block 216. As set forth in block 218, the side spacers may be removed. The sacrificial material may be removed from the at least one first trench, as set forth in block 220. As set forth in block 222, the fill material may be removed from the at least one second trench. A conductive material may be deposited in the at least one first trench and the at least one second trench, as set forth in block 224.

FIG. 30 illustrates a computing device 300 in accordance with one implementation of the present description. The computing device 300 houses a board 302. The board 302 may include a number of components, including but not limited to a processor 304 and at least one communication chip 306A, 306B. The processor 304 is physically and electrically coupled to the board 302. In some implementations, the at least one communication chip 306A, 306B is also physically and electrically coupled to the board 302. In further implementations, the communication chip 306A, 306B is part of the processor 304.

Depending on its applications, the computing device 300 may include other components that may or may not be physically and electrically coupled to the board 302.
These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).

The communication chip 306A, 306B enables wireless communications for the transfer of data to and from the computing device 300. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 306A, 306B may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 300 may include a plurality of communication chips 306A, 306B. For instance, a first communication chip 306A may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 306B may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 304 of the computing device 300 includes an integrated circuit die packaged within the processor 304. In some implementations of the present description, the integrated circuit die of the processor may be connected to other devices with one or more interconnection layers that are formed in accordance with implementations described above. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 306A, 306B also includes an integrated circuit die packaged within the communication chip 306A, 306B. In accordance with another implementation of the present description, the integrated circuit die of the communication chip may be connected to other devices with one or more interconnection layers that are formed in accordance with implementations described above.

In further implementations, another component housed within the computing device 300 may contain an integrated circuit die that includes an interconnect in accordance with embodiments of the present description.

In various implementations, the computing device 300 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder.
In further implementations, the computing device 300 may be any other electronic device that processes data.

It is understood that the subject matter of the present description is not necessarily limited to specific applications illustrated in FIGs. 1-30. The subject matter may be applied to other microelectronic devices and assembly applications, as well as any appropriate electronic application, as will be understood by those skilled in the art.

The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.

In Example 1, a method of forming a microelectronic structure may comprise forming a dielectric layer on a substrate; forming a hardmask layer on the dielectric layer; forming a plurality of backbone structures on the hardmask layer; forming side spacers adjacent sides of each of the plurality of backbone structures; etching a portion of the first hardmask and a portion of the dielectric layer between adjacent side spacers between at least two adjacent backbone structures to form at least one first trench; depositing a sacrificial material in the at least one first trench; removing at least one backbone structure and etching a portion of the hardmask layer and the dielectric layer which resided below the at least one backbone structure to form at least one second trench; depositing a fill material in the at least one second trench; removing the side spacers; removing the sacrificial material from the at least one first trench; removing the fill material from the at least one second trench; and depositing a conductive material in the at least one first trench and the at least one second trench.

In Example 2, the subject matter of Example 1 can optionally include forming the plurality of backbone structures comprising depositing a backbone material on the first hardmask; patterning spacers adjacent the backbone material; and etching the backbone material to transfer the pattern of the spacers into the backbone material.

In Example 3, the subject matter of Example 2 can optionally include the patterning of the spacers adjacent the backbone material comprising patterning sacrificial hardmask structures adjacent the backbone material; depositing a conformal spacer material layer over the plurality of backbone structures; anisotropically etching the conformal spacer material layer; and removing the sacrificial hardmask structures.

In Example 4, the subject matter of any of Examples 1 to 3 can optionally include forming side spacers adjacent sides of each of the plurality of backbone structures comprising depositing a conformal side spacer material layer over the plurality of backbone structures; and anisotropically etching the conformal side spacer material layer.

In Example 5, the subject matter of any of Examples 1 to 4 can optionally include removing the side spacers comprising polishing away the side spacers.

In Example 6, the subject matter of any of Examples 1 to 5 can optionally include depositing the sacrificial material in the at least one first trench comprising depositing a material selected from the group consisting of titanium nitride, ruthenium, and cobalt.

In Example 7, the subject matter of any of Examples 1 to 6 can optionally include depositing the fill material in the at least one second trench comprising depositing a carbon hardmask in the at least one second trench.

In Example 8, the subject matter of any of Examples 1 to 7 can optionally include forming the dielectric layer on the substrate comprising forming a low-k dielectric layer.
In Example 9, the subject matter of any of Examples 1 to 8 can optionally include forming the plurality of backbone structures on the hardmask layer comprising forming the plurality of backbone structures from a material selected from the group consisting of polysilicon, amorphous silicon, amorphous carbon, silicon nitride, and germanium.

In Example 10, the subject matter of any of Examples 1 to 9 can optionally include depositing the conductive material in the at least one first trench and the at least one second trench comprising depositing a metal.

In Example 11, a method of forming a microelectronic structure may comprise forming a dielectric layer on a substrate, wherein the substrate includes a first contact structure and a second contact structure; forming a hardmask layer on the dielectric layer; forming a plurality of backbone structures on the hardmask layer; forming side spacers adjacent sides of each of the plurality of backbone structures; etching a portion of the first hardmask and a portion of the dielectric layer between adjacent side spacers between at least two adjacent backbone structures to form at least one first trench; forming a first via extending from the at least one first trench to the substrate first contact structure; depositing a sacrificial material in the at least one first trench; removing at least one backbone structure and etching a portion of the hardmask layer and the dielectric layer which resided below the at least one backbone structure to form at least one second trench; forming a second via extending from the at least one second trench to the substrate second contact structure; depositing a fill material in the at least one second trench; removing the side spacers; removing the sacrificial material from the at least one first trench; removing the fill material from the at least one second trench; and depositing a conductive material in the at least one first trench, the first via, the at least one second trench, and the second via.

In Example 12, the subject matter of Example 11 can optionally include forming the plurality of backbone structures comprising depositing a backbone material on the first hardmask; patterning spacers adjacent the backbone material; and etching the backbone material to transfer the pattern of the spacers into the backbone material.

In Example 13, the subject matter of Example 12 can optionally include patterning the spacers adjacent the backbone material comprising patterning sacrificial hardmask structures adjacent the backbone material; depositing a conformal spacer material layer over the plurality of backbone structures; anisotropically etching the conformal spacer material layer; and removing the sacrificial hardmask structures.

In Example 14, the subject matter of any of Examples 11 to 13 can optionally include forming side spacers adjacent sides of each of the plurality of backbone structures comprising depositing a conformal side spacer material layer over the plurality of backbone structures; and anisotropically etching the conformal side spacer material layer.

In Example 15, the subject matter of any of Examples 11 to 14 can optionally include removing the side spacers comprising polishing away the side spacers.

In Example 16, the subject matter of any of Examples 11 to 15 can optionally include depositing the sacrificial material in the at least one first trench comprising depositing a material selected from the group consisting of titanium nitride, ruthenium, and cobalt.
In Example 17, the subject matter of any of Examples 11 to 16 can optionally include depositing the fill material in the at least one second trench comprising depositing a carbon hardmask in the at least one second trench.

In Example 18, the subject matter of any of Examples 11 to 17 can optionally include forming the dielectric layer on the substrate comprising forming a low-k dielectric layer.

In Example 19, the subject matter of any of Examples 11 to 18 can optionally include forming the plurality of backbone structures on the hardmask layer comprising forming the plurality of backbone structures from a material selected from the group consisting of polysilicon, amorphous silicon, amorphous carbon, silicon nitride, and germanium.

In Example 20, the subject matter of any of Examples 11 to 19 can optionally include depositing the conductive material in the at least one first trench and the at least one second trench comprising depositing a metal.

In Example 21, a method of forming a microelectronic structure may comprise forming a dielectric layer on a substrate, wherein the substrate includes a first contact structure and a second contact structure; forming a hardmask layer on the dielectric layer; forming a plurality of backbone structures on the hardmask layer; forming side spacers adjacent sides of each of the plurality of backbone structures; etching a portion of the first hardmask and a portion of the dielectric layer between adjacent side spacers between at least two adjacent backbone structures to form at least one first trench; forming a first via extending from the at least one first trench to the substrate first contact structure; depositing a via hardmask material into the first via; depositing a sacrificial material in the at least one first trench; removing at least one backbone structure and etching a portion of the hardmask layer and the dielectric layer which resided below the at least one backbone structure to form at least one second trench; forming a second via extending from the at least one second trench to the substrate second contact structure; depositing a fill material in the at least one second trench; removing the side spacers; removing the sacrificial material from the at least one first trench; removing the via hardmask material from the first via; removing the fill material from the at least one second trench; and depositing a conductive material in the at least one first trench, the first via, the at least one second trench, and the second via.

In Example 22, the subject matter of Example 21 can optionally include removing the via hardmask material from the first via and removing the fill material from the at least one second trench comprising simultaneously removing the via hardmask material from the first via and removing the fill material from the at least one second trench.

Having thus described in detail embodiments of the present description, it is understood that the present description, as defined by the appended claims, is not to be limited by the particular details set forth in the above description, as many apparent variations thereof are possible without departing from the spirit or scope thereof. |
Techniques for performing flow control in Universal Serial Bus (USB) are described. In one design, a USB host sends token packets to a USB device to initiate data exchanges with the USB device. The USB device determines that it is incapable of exchanging data with the USB host, e.g., because there is no data to send or because its buffer is full or near full. The USB device then sends a "flow off" notification to the USB host to suspend data exchanges. The USB host receives the flow off notification and suspends sending token packets to the USB device. Thereafter, the USB device determines that it is capable of exchanging data with the USB host. The USB device then sends a "flow on" notification to the USB host to resume data exchanges. The USB host receives the flow on notification and resumes sending token packets to the USB device. |
CLAIMS

1. An apparatus comprising: a processor configured to determine capability of a Universal Serial Bus (USB) device to exchange data with a USB host, and to send a notification for flow control to the USB host based on the determined capability of the USB device; and a memory coupled to the processor.

2. The apparatus of claim 1, wherein the processor is configured to send the notification for flow control for a particular pipe among a plurality of pipes between the USB device and the USB host.

3. The apparatus of claim 1, wherein the processor is configured to determine that the USB device is incapable of exchanging data with the USB host, and to send a flow off notification to the USB host to suspend data exchanges.

4. The apparatus of claim 3, wherein after sending the flow off notification, the processor is configured to determine that the USB device is capable of exchanging data with the USB host, and to send a flow on notification to the USB host to resume data exchanges.

5. The apparatus of claim 1, wherein the processor is configured to determine that the USB device is incapable of sending data to the USB host, and to send a flow off notification to the USB host, and wherein the USB host suspends sending IN token packets to the USB device in response to the flow off notification.

6. The apparatus of claim 5, wherein the processor is configured to determine that the USB device is incapable of sending data to the USB host when there is no data to send.

7. The apparatus of claim 5, wherein after sending the flow off notification, the processor is configured to determine that the USB device is capable of sending data to the USB host, and to send a flow on notification to the USB host, and wherein the USB host resumes sending IN token packets to the USB device in response to the flow on notification.

8. The apparatus of claim 1, wherein the processor is configured to determine that the USB device is incapable of receiving data from the USB host, and to send a flow off notification to the USB host, and wherein the USB host suspends sending OUT or PING token packets to the USB device in response to the flow off notification.

9. The apparatus of claim 8, wherein the processor is configured to determine that the USB device is incapable of receiving data from the USB host when a buffer at the USB device is full or near full.

10. The apparatus of claim 8, wherein the processor is configured to determine that the USB device is incapable of receiving data from the USB host when a buffer at the USB device is within a predetermined amount of being full, the predetermined amount corresponding to reserved buffer capacity to account for delay by the USB host in suspending the OUT or PING token packets after receiving the flow off notification.

11. The apparatus of claim 8, wherein after sending the flow off notification, the processor is configured to determine that the USB device is capable of receiving data from the USB host, and to send a flow on notification to the USB host, and wherein the USB host resumes sending OUT or PING token packets to the USB device in response to the flow on notification.

12. The apparatus of claim 1, wherein the processor is configured to send the notification for flow control on an interrupt pipe to the USB host.

13. The apparatus of claim 12, wherein the processor is configured to receive an IN token packet for the interrupt pipe from the USB host, and to send the notification for flow control on the interrupt pipe after receiving the IN token packet.
14. A method comprising: determining capability of a Universal Serial Bus (USB) device to exchange data with a USB host; and sending a notification for flow control to the USB host based on the determined capability of the USB device.

15. The method of claim 14, wherein the determining capability of the USB device comprises determining that the USB device is incapable of exchanging data with the USB host, and wherein the sending the notification for flow control comprises sending a flow off notification to the USB host to suspend data exchanges.

16. The method of claim 15, further comprising: determining that the USB device is capable of exchanging data with the USB host; and sending a flow on notification to the USB host to resume data exchanges.

17. The method of claim 14, wherein the sending the notification for flow control comprises sending the notification for flow control on an interrupt pipe to the USB host.

18. An apparatus comprising: means for determining that a Universal Serial Bus (USB) device is incapable of exchanging data with a USB host; and means for sending a flow off notification to the USB host to suspend data exchanges.

19. The apparatus of claim 18, further comprising: means for determining that the USB device is capable of exchanging data with the USB host; and means for sending a flow on notification to the USB host to resume data exchanges.

20. A processor-readable media for storing instructions to: determine that a Universal Serial Bus (USB) device is incapable of exchanging data with a USB host; and send a flow off notification to the USB host to suspend data exchanges.

21. The processor-readable media of claim 20, and further for storing instructions to: determine that the USB device is capable of exchanging data with the USB host; and send a flow on notification to the USB host to resume data exchanges.

22. An apparatus comprising: a processor configured to send token packets to a Universal Serial Bus (USB) device, to receive a first notification for flow control from the USB device, and to alter sending token packets to the USB device in response to the first notification; and a memory coupled to the processor.

23. The apparatus of claim 22, wherein the token packets and the first notification are for a particular pipe among a plurality of pipes between the USB device and a USB host.

24. The apparatus of claim 22, wherein the processor is configured to suspend sending token packets to the USB device in response to the first notification.

25. The apparatus of claim 22, wherein the processor is configured to send token packets at a slower rate to the USB device in response to the first notification.

26. The apparatus of claim 22, wherein the processor is configured to receive a second notification for flow control from the USB device, and to resume sending token packets to the USB device.

27. The apparatus of claim 22, wherein the processor is configured to send IN token packets to the USB device to request data from the USB device.

28. The apparatus of claim 22, wherein the processor is configured to send OUT or PING token packets to the USB device to indicate data to send to the USB device.

29. The apparatus of claim 22, wherein the processor is configured to send token packets for a data pipe to the USB device, and to receive the first notification on an interrupt pipe from the USB device.
30. The apparatus of claim 29, wherein the processor is configured to send IN token packets for the interrupt pipe in accordance with a selected bus access period, and to receive the first notification for flow control after one of the IN token packets sent for the interrupt pipe.

31. A method comprising: sending token packets to a Universal Serial Bus (USB) device; receiving a first notification for flow control from the USB device; and suspending sending token packets to the USB device in response to the first notification.

32. The method of claim 31, further comprising: receiving a second notification for flow control from the USB device; and resuming sending token packets to the USB device.

33. The method of claim 31, wherein the receiving the first notification for flow control comprises receiving the first notification for flow control on an interrupt pipe from the USB device.

34. An apparatus comprising: means for sending token packets to a Universal Serial Bus (USB) device; means for receiving a first notification for flow control from the USB device; and means for suspending sending token packets to the USB device in response to the first notification.

35. The apparatus of claim 34, further comprising: means for receiving a second notification for flow control from the USB device; and means for resuming sending token packets to the USB device.

36. A processor-readable media for storing instructions to: send token packets to a Universal Serial Bus (USB) device; receive a first notification for flow control from the USB device; and suspend sending token packets to the USB device in response to the first notification.

37. The processor-readable media of claim 36, and further for storing instructions to: receive a second notification for flow control from the USB device; and resume sending token packets to the USB device. |
FLOW CONTROL FOR UNIVERSAL SERIAL BUS (USB)

The present application claims priority to provisional U.S. Application Serial No. 60/808,691, entitled "Optimized USB Flow Control Mechanism," filed May 25, 2006, assigned to the assignee hereof and incorporated herein by reference.

BACKGROUND

I. Field

The present disclosure relates generally to data communication, and more specifically to techniques for controlling data exchanges via USB.

II. Background

USB is a serial bus that is widely used to interconnect computers with external devices such as keyboards, mouse devices, printers, scanners, memory sticks, disk drives, digital cameras, webcams, etc. USB is also commonly used for other electronics devices such as personal digital assistants (PDAs), game machines, etc.

USB utilizes a host-centric architecture for data exchanges between a USB host and USB devices coupled to the USB host. The USB host may reside on a computer, and the USB devices may be external devices coupled to the computer via USB wire. In the host-centric architecture, the USB host controls communication with all USB devices. Whenever a new USB device couples to the computer, the USB host and the USB device exchange signaling to configure the USB device. Thereafter, the USB host may periodically send token packets to the USB device whenever the USB host desires to send data to, or receive data from, the USB device. The USB device may receive data from, or send data to, the USB host whenever token packets are issued by the USB host.

The USB host may start a transaction by sending a token packet to the USB device. Upon receiving the token packet, the USB device may send a negative acknowledgement (NAK) handshake packet if the USB device temporarily cannot send or receive data. Upon receiving the NAK from the USB device, the USB host may retry the NAK'ed transaction by sending another token packet at a later time.

NAK handshake packets may be used for flow control in USB. The USB device may send NAK handshake packets to adjust/throttle the data rate and prevent its buffers from under-flowing or over-flowing. However, the NAK'ed transactions may consume a significant amount of USB bandwidth and power.

There is therefore a need in the art for techniques to more efficiently perform flow control in USB.

SUMMARY

Techniques for performing flow control in USB in order to reduce NAK'ed transactions and improve data performance and power efficiency are described herein. For flow control, a USB device may determine its capability to exchange data with a USB host and may send notifications for flow control based on its capability.

In one design, the USB host may (e.g., periodically) send token packets to the USB device to initiate data exchanges with the USB device (e.g., to send data to or receive data from the USB device). The USB device may determine that it is incapable of exchanging data with the USB host, e.g., because there is no data to send or because its buffer is full or near full. The USB device may send a "flow off" notification (e.g., on an interrupt pipe) to the USB host to suspend data exchanges. The USB host may receive the flow off notification and suspend sending token packets to the USB device. Thereafter, the USB device may determine that it is capable of exchanging data with the USB host. The USB device may then send a "flow on" notification to the USB host to resume data exchanges. The USB host may receive the flow on notification and resume sending token packets to the USB device.
By suspending transmission of token packets during the time that the USB device is incapable of exchanging data, NAK'ed transactions may be reduced or avoided.

Various aspects and features of the disclosure are described in further detail below.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram of a USB host and a USB device.
FIG. 2A shows IN transactions to read data from the USB device.
FIGS. 2B and 2C show OUT transactions to send data to the USB device.
FIG. 3 shows transmissions of token packets and NAK'ed transactions without flow control.
FIG. 4 shows flow control for USB using notifications sent on an interrupt pipe.
FIG. 5 shows a process performed by the USB device for flow control.
FIG. 6 shows a process performed by the USB host for flow control.
FIG. 7 shows a block diagram of a wireless communication device.

DETAILED DESCRIPTION

The flow control techniques described herein may be used for USB, other buses, polling-based input/output (I/O) systems, and other systems in which data is exchanged between entities. For clarity, the techniques are specifically described below for USB, which is covered in the publicly available document "Universal Serial Bus Specification," Revision 2.0, April 27, 2000.

USB uses the following terminology:
- Function - a USB device that provides a capability/task to a USB host,
- Endpoint - a source or a sink of information in a communication flow between a USB device (or function) and a USB host,
- Pipe - a logical channel between a USB host and an endpoint on a USB device, and
- Transaction - delivery of service to an endpoint, which consists of a token packet, an optional data packet, and an optional handshake packet.

A USB device may have one or more functions, e.g., a webcam may have one function for video and another function for sound. Each physical USB device is identified by a unique 7-bit address assigned by a USB host. The USB host may support up to 127 physical USB devices with 127 different addresses of 1 to 127. A function may have one or more endpoints. Each endpoint is identified by a 4-bit endpoint number. For example, a function may have an IN endpoint that sends data to the USB host and an OUT endpoint that receives data from the USB host, where "IN" and "OUT" are from the perspective of the USB host.

FIG. 1 shows a block diagram of a design of a USB host 110 and a USB device 120. In this design, USB host 110 includes applications 112, a function driver 114, and a bus driver 116. Applications 112 may comprise any application having data to exchange with USB device(s). Applications 112 may reside on USB host 110, as shown in FIG. 1, or may be part of a computer or some other electronics device within which USB host 110 resides. Function driver 114 manages data exchanges for functions of USB devices coupled to USB host 110. Function driver 114 interfaces with applications 112 and initiates transactions to send and/or receive data for the applications. Bus driver 116 supports packet exchanges with USB devices via a USB wire 130 and performs physical layer processing for the packet exchanges. Bus driver 116 may send and receive packets as directed by function driver 114.

In the design shown in FIG. 1, USB device 120 includes applications 122, a function 124, a USB driver 126, an IN buffer 128a, and an OUT buffer 128b. In general, USB device 120 may have one or more functions. For simplicity, the following description assumes that USB device 120 has a single function.
Applications 122 may comprise any application having data to exchange with USB host 110. Function 124 interfaces with applications 122 and supports data exchanges with USB host 110 for the applications. USB driver 126 supports packet exchanges with USB host 110 via USB wire 130 and performs physical layer processing for the packet exchanges. IN buffer 128a stores data to be sent to USB host 110, and OUT buffer 128b stores data received from USB host 110.

FIG. 1 shows a specific design of USB host 110 and USB device 120. In general, a USB host may include the same or different modules than those shown in FIG. 1 for USB host 110. A USB device may also include the same or different modules than those shown in FIG. 1 for USB device 120. Each module may be implemented with hardware, firmware, software, or a combination thereof.

USB host 110 may initiate a transaction to receive data from an IN endpoint of function 124 at USB device 120 or to send data to an OUT endpoint of function 124. Different sequences of packets may be exchanged for different types of transactions. USB 2.0 supports three different speed settings - low-speed covering up to 1.5 megabits/second (Mbps), full-speed covering up to 12 Mbps, and high-speed covering up to 480 Mbps. Different sequences of packets may be exchanged for OUT transactions for different speed settings.

FIG. 2A shows IN transactions to read data from USB device 120 for all three speed settings. For an IN transaction, USB host 110 sends an IN token packet, which is a packet requesting to read data from USB device 120 (step 212). The IN token packet contains the address of USB device 120 and the IN endpoint number. USB device 120 receives the IN token packet, determines that it has data to send and can send the data, and sends a data packet to USB host 110 (step 214). USB host 110 receives the data packet, determines that the packet is received correctly, and sends an acknowledgement (ACK) handshake packet (step 216). Steps 212, 214 and 216 constitute a successful IN transaction.

For another IN transaction at a later time, USB host 110 sends an IN token packet to the IN endpoint of function 124 at USB device 120 (step 222). USB device 120 receives the IN token packet, determines that it has no data to send or that it cannot send the data, and sends a NAK handshake packet to USB host 110 (step 224). USB host 110 receives the NAK packet and may retry the IN transaction at a later time. Steps 222 and 224 constitute a NAK'ed IN transaction in which two overhead packets (but no data packet) are exchanged between USB host 110 and USB device 120.

FIG. 2B shows OUT transactions to send data to USB device 120 for low-speed and full-speed. For an OUT transaction, USB host 110 sends an OUT token packet, which is a packet requesting to write data to USB device 120 (step 232). The OUT token packet contains the address of USB device 120 and the OUT endpoint number. USB host 110 then sends a data packet to USB device 120 right after the OUT token packet, without waiting for a reply from USB device 120 (step 234). USB device 120 receives the OUT token packet, receives the data packet, determines that the packet is received correctly, and sends an ACK handshake packet (step 236). Steps 232, 234 and 236 constitute a successful OUT transaction for low-speed or full-speed.
For another OUT transaction at a later time, USB host 110 sends both an OUT token packet and a data packet to the OUT endpoint of function 124 at USB device 120 (steps 242 and 244). USB device 120 receives the OUT token packet and the data packet, determines that it cannot receive data, and sends a NAK handshake packet (step 246). USB host 110 receives the NAK packet and may retry the OUT transaction at a later time. Steps 242, 244 and 246 constitute a NAK'ed OUT transaction in which two overhead packets and a data packet are exchanged between USB host 110 and USB device 120 for an unsuccessful transfer.

FIG. 2C shows OUT transactions to send data to USB device 120 for high-speed. For an OUT transaction, USB host 110 sends a PING token packet, which is a packet querying the capability of USB device 120 to receive data (step 252). A PING packet is categorized as a special packet in USB, but is referred to as a token packet herein. The PING token packet in step 252 contains the address of USB device 120 and the OUT endpoint number. USB device 120 receives the PING token packet and sends an ACK handshake packet if it is capable of receiving data (step 254). Upon receiving the ACK, USB host 110 sends an OUT token packet and a data packet to USB device 120 (steps 262 and 264), and USB device 120 returns an ACK or a NYET handshake packet (step 266). Steps 252 to 266 constitute a successful OUT transaction for high-speed.

For another OUT transaction at a later time, USB host 110 sends a PING token packet to USB device 120 (step 272). USB device 120 receives the PING token packet, determines that it cannot receive data, and sends a NAK handshake packet (step 274). USB host 110 receives the NAK packet and may retry the OUT transaction at a later time. Steps 272 and 274 constitute a NAK'ed OUT transaction in which two overhead packets are exchanged between USB host 110 and USB device 120 for an unsuccessful transfer.

As shown in FIGS. 2A to 2C, USB host 110 controls data exchanges with USB device 120. USB host 110 initiates both IN transactions to read data from USB device 120 and OUT transactions to write data to the USB device. USB host 110 may send IN token packets periodically based on the data requirements of USB device 120 and the available USB bandwidth. USB host 110 may send OUT or PING token packets whenever it has data to send to USB device 120. USB host 110 typically sends the IN token packets blindly and does not know a priori whether USB device 120 has any data to send or can send the data. USB host 110 also typically sends the OUT or PING token packets when it has data to send and does not know a priori whether or not USB device 120 can receive data. A device-side sketch of this token handling is given below.
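For readers who think in code, the token-handling rules of FIGS. 2A to 2C reduce to a small decision per token packet. The following C sketch is purely illustrative: handle_token, endpoint_state, and the reply names are hypothetical and are not drawn from the USB specification or any real driver stack.

    #include <stdint.h>

    typedef enum { TOKEN_IN, TOKEN_OUT, TOKEN_PING } token_type;
    typedef enum { REPLY_DATA, REPLY_ACK, REPLY_NAK } reply_type;

    /* Hypothetical per-endpoint state; a real stack tracks much more. */
    typedef struct {
        uint32_t in_bytes_queued;  /* data waiting to go to the host   */
        uint32_t out_bytes_free;   /* room left in the OUT buffer      */
        uint32_t max_packet;       /* max packet size for the endpoint */
    } endpoint_state;

    /* Decide how the device replies to a token packet. At low/full
     * speed the OUT data packet arrives before the handshake is sent. */
    reply_type handle_token(const endpoint_state *ep, token_type token)
    {
        switch (token) {
        case TOKEN_IN:   /* steps 212-224: data packet or NAK */
            return ep->in_bytes_queued > 0 ? REPLY_DATA : REPLY_NAK;
        case TOKEN_PING: /* steps 252-274, high-speed only    */
        case TOKEN_OUT:  /* steps 232-246, low/full speed     */
            return ep->out_bytes_free >= ep->max_packet ? REPLY_ACK
                                                        : REPLY_NAK;
        }
        return REPLY_NAK;
    }

Every REPLY_NAK here corresponds to a NAK'ed transaction on the wire, i.e., to the overhead that the flow control design described next is meant to eliminate.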
FIG. 3 shows an example of packet exchanges between USB host 110 and USB device 120. In this example, USB device 120 has an interrupt pipe and a data pipe with USB host 110. In USB, a pipe is typically associated with a specific function, a specific endpoint, and a specific direction. An interrupt pipe may be considered a signaling channel that may be used to send signaling information, e.g., flow control information. In general, a USB device may or may not have an interrupt pipe. A data pipe may be considered a data channel that may be used to send data traffic. A data pipe may be a bulk pipe or an isochronous pipe in USB and may be for the IN or OUT direction. A pipe is unidirectional and may carry information for either the IN direction from USB device 120 to USB host 110 or the OUT direction from USB host 110 to USB device 120. In the example shown in FIG. 3, the interrupt pipe and the data pipe are both for the IN direction.

An endpoint for the interrupt pipe may specify a desired bus access period for this pipe during setup with USB host 110. The bus access period for the interrupt pipe may be selected from one of the following ranges:
- 10 to 255 milliseconds (ms) if the endpoint supports low-speed,
- 1 to 255 ms if the endpoint supports full-speed, and
- 0.125 to 0.125×2^M ms if the endpoint supports high-speed, where M ≤ 15.

USB host 110 may send IN token packets for the interrupt pipe to USB device 120 at a period of Tinterrupt, which may be equal to or less than the bus access period for the interrupt pipe. Whenever an IN token packet is received for the interrupt pipe, USB device 120 may send either a data packet with control information or a NAK handshake packet to USB host 110, e.g., as shown in FIG. 2A.

USB host 110 may send IN token packets for the data pipe to USB device 120 at a period of Tdata, which may be determined based on the data requirements of USB device 120, the available USB bandwidth, etc. Tdata may be much shorter than Tinterrupt and may be on the order of microseconds (µs) for full-speed and high-speed. Thus, USB host 110 may send many (e.g., hundreds of) IN token packets for the data pipe for each IN token packet sent for the interrupt pipe. Whenever an IN token packet is received for the data pipe, USB device 120 may send either a data packet with traffic data or a NAK handshake packet to USB host 110. For simplicity, ACK handshake packets are not shown in FIG. 3.

As shown in FIG. 3, there may be many NAK'ed transactions for the data pipe. The NAK'ed transactions may consume a significant amount of USB bandwidth and may reduce the maximum effective data throughput for other pipes on the USB wire. The NAK'ed transactions may also consume power in USB host 110 and USB device 120 without providing any beneficial result.

In an aspect, flow control is performed for USB in order to reduce or avoid NAK'ed transactions. This may improve data performance and power efficiency. For flow control, USB device 120 may determine its capability to exchange data with USB host 110. USB device 120 may send notifications to USB host 110 for flow control based on this determined capability.

In general, USB device 120 may send various types of information to USB host 110 for flow control. For example, the following information may be sent for flow control:
- Flow off notification - indication to suspend transactions/data exchanges,
- Flow on notification - indication to resume transactions/data exchanges,
- Data rate - indicate rate of traffic data to exchange,
- Buffer size - indicate amount of data to send in the IN or OUT direction,
- Token rate - indicate rate of token packets to be sent by USB host 110,
- Timeout - used to periodically determine whether or not to enable flow control,
- N-shot indication - indication to perform N data transfers and then stop, where N > 1,
- Control duration - indicate duration for which flow control is applied.

Flow control information is sent on a different pipe than the pipe being flow controlled.

In one design, flow control is performed based on flow off and flow on notifications, which may also be referred to by other names. In this design, when USB device 120 determines that it is incapable of exchanging data for a particular pipe, USB device 120 sends a flow off notification to USB host 110 to suspend transactions on this pipe. Upon receiving the flow off notification, USB host 110 does not schedule transactions on the pipe, which then avoids waste of USB bandwidth due to NAK'ed transactions. When USB device 120 later determines that it is again capable of exchanging data for the suspended pipe, USB device 120 sends a flow on notification to USB host 110 to resume transactions on this pipe. Upon receiving the flow on notification, USB host 110 resumes transactions on the pipe. In this design, the flow off and flow on notifications are essentially requests to stop and start transactions on the pipe. A device-side sketch of this per-pipe decision is given below.
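As a concrete, and again purely illustrative, reading of this design, the device can keep one bit of flow state per pipe and emit a notification only on transitions. In this C sketch, pipe_flow_state, update_flow_control, and queue_notification are assumptions of this description, not the patent's implementation; the printf stub stands in for queuing the message on the interrupt endpoint.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint8_t pipe_id;  /* identifies pipe x in the notification   */
        bool    flowing;  /* true while the host schedules this pipe */
    } pipe_flow_state;

    /* Stand-in transport: a real device would queue the message on the
     * interrupt endpoint, to be returned at the next IN token packet. */
    static void queue_notification(uint8_t pipe_id, bool flow_on)
    {
        printf("pipe %u: %s\n", pipe_id, flow_on ? "flow on" : "flow off");
    }

    /* Call whenever the pipe's capability may have changed, e.g., after
     * data is produced or consumed, or a buffer level crosses a threshold. */
    void update_flow_control(pipe_flow_state *p, bool capable_now)
    {
        if (p->flowing && !capable_now) {
            queue_notification(p->pipe_id, false); /* flow off: suspend */
            p->flowing = false;
        } else if (!p->flowing && capable_now) {
            queue_notification(p->pipe_id, true);  /* flow on: resume   */
            p->flowing = true;
        }
        /* No transition: send nothing and spend no bus bandwidth. */
    }

For an IN pipe, capable_now might mean that IN buffer 128a is non-empty; for an OUT pipe, that OUT buffer 128b has room beyond the reserve discussed below.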
Flow control may be performed independently for each pipe at USB device 120. Since each pipe is unidirectional, flow control may be performed independently for the IN and OUT directions. Flow control may also be performed for a set of pipes that are "bundled" together, e.g., as a single logical unit.

USB device 120 may exchange data with USB host 110 on the upstream and/or downstream. Thus, exchanging data may refer to sending data to USB host 110 in the IN direction (or upstream) or receiving data from USB host 110 in the OUT direction (or downstream). USB device 120 may be incapable of successfully exchanging data with USB host 110 in a given direction for various reasons. For the IN direction, USB device 120 may be incapable of sending data to USB host 110 if there is no data to send, if IN buffer 128a is empty or near empty, if processing resources are unavailable at the USB device, etc. For the OUT direction, USB device 120 may be incapable of receiving data from USB host 110 if OUT buffer 128b is full or near full, if processing resources are unavailable at the USB device, if the CPU of the USB device is occupied with other tasks, etc.

For upstream USB transfer in the IN direction, USB device 120 may send a flow off notification for a particular pipe x when USB device 120 does not have any data to send on pipe x to USB host 110. The flow off notification may include information identifying pipe x to USB host 110. USB host 110 may stop requesting data on pipe x and hence may suspend sending IN token packets for pipe x to USB device 120. When USB device 120 has data available to send on pipe x, USB device 120 may send a flow on notification for pipe x to USB host 110. The flow on notification may include information identifying pipe x to USB host 110. USB host 110 may then resume sending IN token packets for pipe x to USB device 120.

An upstream USB transfer on pipe x may be incomplete at the time the flow off notification was sent for pipe x. In this case, USB device 120 may resume the upstream USB transfer and continue from where it left off upon sending the flow on notification. Alternatively, USB device 120 may restart the upstream USB transfer from the beginning and retransmit the portion that was sent prior to the flow off notification.
For downstream USB transfer in the OUT direction, USB device 120 may send a flow off notification for a particular pipe x when USB device 120 determines that it is unable to receive data on pipe x from USB host 110. For example, OUT buffer 128b at USB device 120 may be full or near full, and USB device 120 may be unable to receive new data at that moment or shortly thereafter. The flow off notification may include information identifying pipe x to USB host 110. USB host 110 may stop sending data on pipe x and hence may suspend sending OUT or PING token packets for pipe x to USB device 120. When USB device 120 is again able to receive data on pipe x, USB device 120 may send a flow on notification for pipe x to USB host 110. The flow on notification may include information identifying pipe x to USB host 110. USB host 110 may then start sending data on pipe x and hence may resume sending OUT or PING token packets for pipe x to USB device 120.

A downstream USB transfer on pipe x may be incomplete at the time the flow off notification was sent for pipe x. In this case, USB host 110 may resume the downstream USB transfer and continue from where it left off upon receiving the flow on notification. Alternatively, USB host 110 may restart the downstream USB transfer from the beginning and retransmit the portion that was sent prior to the flow off notification.

The flow off and flow on notifications may be sent in various manners, e.g., using existing USB messages or new USB messages. In one design, an existing ConnectionSpeedChange notification message defined in the USB Class Definition for Communication Devices is used to convey the flow off and flow on notifications. In this design, a connection speed value of zero may be used to convey the flow off notification, and a non-zero value may be used to convey the flow on notification (and possibly the allowable data rate). The ConnectionSpeedChange notification message may be sent on the interrupt pipe. The flow off and flow on notifications may also be conveyed in another existing USB message or in a new USB message defined for this purpose. A hedged sketch of this encoding is given below.

In one design, the flow off and flow on notifications are sent on an interrupt pipe, which is always available while USB device 120 is connected to USB host 110. The interrupt pipe operates in a similar manner as a data pipe. However, the bus access period may be much slower for the interrupt pipe than for the data pipe, e.g., on the order of milliseconds for the interrupt pipe and on the order of microseconds for a full-speed or high-speed data pipe. Hence, USB host 110 may send token packets at a much slower rate for the interrupt pipe than for the data pipe. Whenever USB device 120 receives an IN token packet for the interrupt pipe, USB device 120 may send a flow control notification message, a NAK handshake packet, or some other packet on the interrupt pipe to USB host 110. A flow control notification may be a flow off notification or a flow on notification.
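To make the message format concrete, here is one possible encoding of the notifications in a ConnectionSpeedChange message. This is a hedged sketch rather than the patent's implementation: the 0x2A notification code, the 0xA1 bmRequestType, and the two little-endian bit-rate fields reflect one reading of the CDC class definition and should be verified against the actual specification; build_speed_change and its buffer layout are hypothetical.

    #include <stddef.h>
    #include <stdint.h>

    #define CDC_NOTIFY_CONNECTION_SPEED_CHANGE 0x2Au /* assumed code */

    static void put_le32(uint8_t *p, uint32_t v)
    {
        p[0] = (uint8_t)v;         p[1] = (uint8_t)(v >> 8);
        p[2] = (uint8_t)(v >> 16); p[3] = (uint8_t)(v >> 24);
    }

    /* bitrate_bps == 0 conveys "flow off"; a non-zero value conveys
     * "flow on" (and possibly the allowable data rate), per the text. */
    size_t build_speed_change(uint8_t buf[16], uint8_t interface_num,
                              uint32_t bitrate_bps)
    {
        buf[0] = 0xA1;                               /* bmRequestType */
        buf[1] = CDC_NOTIFY_CONNECTION_SPEED_CHANGE; /* bNotification */
        buf[2] = 0; buf[3] = 0;                      /* wValue = 0    */
        buf[4] = interface_num; buf[5] = 0;          /* wIndex        */
        buf[6] = 8; buf[7] = 0;                      /* wLength = 8   */
        put_le32(&buf[8],  bitrate_bps);             /* upstream rate */
        put_le32(&buf[12], bitrate_bps);             /* downstream    */
        return 16;                                   /* bytes to send */
    }

The device would hand this 16-byte buffer to the interrupt endpoint, where it is returned when USB host 110 next sends an IN token packet for the interrupt pipe.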
FIG. 4 shows a design of flow control for USB using notifications sent on an interrupt pipe. In general, flow control may be performed for the IN direction and/or the OUT direction. In the example shown in FIG. 4, USB device 120 has an interrupt pipe and a data pipe, both of which are for the IN direction.

USB host 110 may periodically send IN token packets for the data pipe to USB device 120. USB device 120 may respond to each IN token packet by sending either a data packet or a NAK handshake packet to USB host 110. At time T1, USB device 120 determines that it has no data to send on the data pipe to USB host 110. USB device 120 then waits for the next IN token packet for the interrupt pipe and, at time T2, sends a flow off notification message on the interrupt pipe to USB host 110. USB host 110 receives the flow off notification and, starting at time T3, suspends sending IN token packets for the data pipe. USB host 110 may periodically send IN token packets for the interrupt pipe, which may be NAK'ed by USB device 120.

At time T4, USB device 120 determines that it has data to send on the data pipe to USB host 110. USB device 120 then waits for the next IN token packet for the interrupt pipe and, at time T5, sends a flow on notification message on the interrupt pipe to USB host 110. USB host 110 receives the flow on notification and, starting at time T6, resumes sending IN token packets for the data pipe.

As shown in FIG. 4, NAK'ed transactions on the data pipe may be reduced or avoided by sending the flow off notification upon determining that there is no data to send on the data pipe. The bandwidth saved by avoiding NAK'ed transactions on the data pipe may be used for other pipes sharing the USB wire connected to USB host 110.

As shown in FIG. 4, USB device 120 may or may not send a flow control notification whenever an IN token packet is received for the interrupt pipe. Hence, some transactions on the interrupt pipe may be NAK'ed. However, the transaction rate for the interrupt pipe may be much lower than the transaction rate for the data pipe. Hence, much less USB bandwidth may be wasted due to NAK'ed transactions on the interrupt pipe than due to NAK'ed transactions on the data pipe. Furthermore, the bus access period for the interrupt pipe may be selected to obtain the desired response time for sending notifications while reducing overhead due to NAK'ed transactions.

In the design shown in FIG. 4, USB device 120 is able to send a flow control notification only after receiving an IN token packet for the interrupt pipe, instead of at any time. Furthermore, there may be some delay from the time that USB host 110 receives a flow off notification to the time that transactions are suspended on the data pipe. USB device 120 may send flow control notifications in a manner that accounts for the bus access period of the interrupt pipe and the delay of USB host 110. For a data pipe in the OUT direction, USB device 120 may continue to receive data from USB host 110 until OUT transactions are suspended. USB device 120 may reserve some capacity in OUT buffer 128b in order to avoid NAK'ing OUT transactions during the interim period from the time that the flow off notification is sent to the time that USB host 110 suspends OUT transactions on the data pipe. The amount of reserved buffer capacity may be determined based on the expected length of the interim period, the maximum or average data rate for the data pipe, etc., as illustrated in the sizing sketch below.

In the design shown in FIG. 4, flow control is performed based solely on flow off and flow on notifications. In another design, flow control is performed based on data rate, in addition to or in lieu of the flow off and flow on notifications. USB device 120 may send the data rate to USB host 110, which may then send token packets such that the data rate can be achieved. In yet another design, flow control is performed based on buffer size, which is indicative of the amount of data available to send. USB host 110 may send token packets at a rate determined based on the buffer size. In yet another design, flow control is performed based on token rate, in addition to or in lieu of the flow off and flow on notifications. USB host 110 may send token packets at the token rate to USB device 120. In general, flow control may be performed based on any of the parameters listed above (e.g., flow off and flow on notifications, data rate, buffer size, token rate, control duration, etc.) and/or other parameters.
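As a rough sizing sketch for that reserve: assume, hypothetically, that the worst-case interim period is one interrupt-pipe bus access period (the notification just missed an IN token) plus the host's reaction delay. The reserve is then the data that can still arrive in that window. All names and numbers here are illustrative assumptions, not values from this description.

    #include <stdint.h>

    /* Bytes that may still arrive between sending the flow off
     * notification and the host actually suspending OUT transactions. */
    uint32_t reserve_bytes(uint32_t max_rate_bytes_per_ms,
                           uint32_t interrupt_period_ms,
                           uint32_t host_delay_ms)
    {
        uint32_t interim_ms = interrupt_period_ms + host_delay_ms;
        return max_rate_bytes_per_ms * interim_ms;
    }

    /* Example: about 60,000 bytes/ms (near the 480 Mbps high-speed
     * ceiling), an 8 ms interrupt period, and 2 ms of host delay give
     * a reserve of roughly 600,000 bytes; slower pipes need far less. */

The flow off threshold then becomes the buffer size minus reserve_bytes, which matches the "predetermined amount" of reserved capacity described above.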
FIG. 5 shows a design of a process 500 performed by a USB device for flow control. The capability of the USB device to exchange data with a USB host may be determined (block 512). A first notification for flow control may be sent to the USB host based on the determined capability of the USB device (block 514). A change in capability of the USB device to exchange data may be determined (block 516). A second notification for flow control may be sent to the USB host based on the determined change in capability of the USB device (block 518). The notifications for flow control may be sent for a particular pipe among multiple pipes between the USB device and the USB host.
For blocks 512 to 518, a determination may be made that the USB device is incapable of exchanging data with the USB host. A flow off notification may be sent to the USB host to suspend data exchanges. Thereafter, a determination may be made that the USB device is capable of exchanging data with the USB host. A flow on notification may be sent to the USB host to resume data exchanges. The flow off and flow on notifications may correspond to the first and second notifications, respectively.
[0058] For a pipe in the IN direction, a determination may be made that the USB device is incapable of sending data to the USB host, e.g., because there is no data to send. A flow off notification may be sent to the USB host, which may then suspend sending IN token packets to the USB device. Thereafter, a determination may be made that the USB device is capable of sending data to the USB host. A flow on notification may be sent to the USB host, which may then resume sending IN token packets to the USB device.
For a pipe in the OUT direction, a determination may be made that the USB device is incapable of receiving data from the USB host, e.g., because a buffer at the USB device is full or near full. A flow off notification may be sent to the USB host, which may then suspend sending OUT or PING token packets to the USB device. Some reserved buffer capacity may be used to account for delay by the USB host in suspending OUT or PING token packets after receiving the flow off notification. Thereafter, a determination may be made that the USB device is capable of receiving data from the USB host. A flow on notification may be sent to the USB host, which may then resume sending OUT or PING token packets to the USB device.
[0060] The notifications for flow control may be sent on an interrupt pipe to the USB host. An IN token packet for the interrupt pipe may be received from the USB host, and a notification for flow control may be sent on the interrupt pipe after receiving the IN token packet.
FIG. 6 shows a design of a process 600 performed by a USB host for flow control. Token packets may be sent to a USB device to initiate data exchanges with the USB device (block 612). These token packets may be IN token packets that request data from the USB device or OUT or PING token packets that indicate data to send to the USB device. A first notification for flow control may be received from the USB device (block 614). The USB host may alter sending token packets to the USB device in response to the first notification (block 616). For example, the USB host may suspend sending token packets or may send token packets at a slower rate. Thereafter, a second notification for flow control may be received from the USB device (block 618). The USB host may resume sending token packets to the USB device (block 620).
[0062] The token packets and the notifications may be for a particular pipe among multiple pipes between the USB device and the USB host. The notifications may be received on an interrupt pipe from the USB device. IN token packets for the interrupt pipe may be sent in accordance with a bus access period. The notifications may be received after the IN token packets for the interrupt pipe.
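Blocks 612 to 620 amount to a small scheduling loop on the host side. The sketch below is one hypothetical rendering of process 600; the two callables stand in for bus-level machinery that this description does not specify.

```python
def run_host_flow_control(pipes, poll_notifications, send_token):
    """Host-side loop over process 600.

    `poll_notifications` yields (pipe_id, kind) pairs read from the
    interrupt pipe; `send_token` issues an IN, OUT, or PING token packet.
    """
    suspended = set()
    while True:
        for pipe_id, kind in poll_notifications():
            if kind == "flow_off":
                suspended.add(pipe_id)       # block 616: alter token sending
            elif kind == "flow_on":
                suspended.discard(pipe_id)   # block 620: resume token sending
        for pipe_id in pipes:
            if pipe_id not in suspended:
                send_token(pipe_id)          # block 612: initiate a data exchange
```

Block 616 could equally be rendered as sending token packets at a slower rate for the flowed-off pipe instead of suspending it outright.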
The flow control techniques described herein may be implemented with device-initiated higher-level flow control. The techniques may be implemented within the current USB specification using existing message(s) to send flow control notifications. The techniques may be implemented by modifying higher-layer drivers at the USB host and the USB device, which may simplify implementation.
The flow control techniques described herein may provide certain advantages. First, NAK'ed transactions may be reduced or avoided with flow control. The saved bandwidth may be re-allocated to other pipes, which may then improve overall data throughput over the USB wire. Second, overall power efficiency may be improved for the USB device and the USB host.
[0065] The flow control techniques described herein may be used for USB hosts and USB devices that are commonly used for computers, wireless communication devices, and other electronic devices. The use of the techniques for a wireless device is described below.
FIG. 7 shows a block diagram of a design of a wireless communication device 700 in a wireless communication system. Wireless device 700 may be a cellular phone, a terminal, a handset, a PDA, etc. The wireless communication system may be a Code Division Multiple Access (CDMA) system, a Global System for Mobile Communications (GSM) system, etc.
Wireless device 700 is capable of providing bi-directional communication via a receive path and a transmit path. On the receive path, signals transmitted by base stations (not shown in FIG. 7) are received by an antenna 712 and provided to a receiver (RCVR) 714. Receiver 714 conditions and digitizes the received signal and provides samples to a digital section 720 for further processing. On the transmit path, a transmitter (TMTR) 716 receives data to be transmitted from digital section 720, processes and conditions the data, and generates a modulated signal, which is transmitted via antenna 712 to the base stations.
Digital section 720 includes various processing, interface, and memory units such as, for example, a modem processor 722, a controller/processor 724, an internal memory 726, a graphics processing unit (GPU) 728, a central processing unit (CPU) 730, an external bus interface (EBI) 732, a USB device 734, and a USB host 736. Modem processor 722 may perform processing for data transmission and reception, e.g., encoding, modulation, demodulation, and decoding. Controller/processor 724 may direct the operation of various units within digital section 720. Internal memory 726 may store data and/or instructions for various units within digital section 720. GPU 728 may perform processing for graphics, images, videos, texts, etc. CPU 730 may perform general-purpose processing for various applications at wireless device 700. EBI 732 may facilitate transfer of data between digital section 720 (e.g., internal memory 726) and a main memory 742. USB device 734 may communicate with a USB host 744, which may reside in a laptop computer or some other electronic device. USB host 736 may communicate with a USB device 746, which may be a display unit, a speaker, a webcam, etc. USB device 734 and/or USB host 736 may implement the flow control techniques described herein.
Digital section 720 may be implemented with one or more processors.
Digital section 720 may also be fabricated on one or more application specific integrated circuits (ASICs) and/or some other type of integrated circuits (ICs).
[0070] The flow control techniques described herein may be implemented by various means. For example, these techniques may be implemented in hardware, firmware, software, or a combination thereof. For a hardware implementation, the processing units used to perform flow control at a USB host or a USB device may be implemented within one or more ASICs, digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, a computer, or a combination thereof.
For a firmware and/or software implementation, the flow control techniques may be implemented with modules (e.g., procedures, functions, etc.) that perform the functions described herein. The firmware and/or software instructions may be stored in a memory (e.g., memory 726 or 742 in FIG. 7) and executed by a processor (e.g., processor 724). The memory may be implemented within the processor or external to the processor. The firmware and/or software instructions may also be stored in other processor-readable media such as random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), programmable read-only memory (PROM), electrically erasable PROM (EEPROM), FLASH memory, compact disc (CD), magnetic or optical data storage devices, etc.
An apparatus implementing the techniques described herein may be a standalone unit or may be part of a device. The device may be (i) a stand-alone integrated circuit (IC), (ii) a set of one or more ICs that may include memory ICs for storing data and/or instructions, (iii) an ASIC such as a mobile station modem (MSM), (iv) a module that may be embedded within other devices, (v) a cellular phone, wireless device, handset, or mobile unit, (vi) etc.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
WHAT IS CLAIMED IS:
Disclosed is a system that comprises a memory device and a processing device, operatively coupled with the memory device, to perform operations that include identifying, by the processing device, a plurality of partitions located on a die of the memory device. The operations performed by the processing device further include selecting, based on evaluating a predefined criterion reflecting a physical layout of the die of the memory device, a first partition and a second partition of the plurality of partitions. The operations performed by the processing device further include generating a codeword comprising first data residing on the first partition and second data residing on the second partition.
CLAIMS
What is claimed is:
1. A system comprising: a memory device; and a processing device, operatively coupled with the memory device, to perform operations comprising: identifying, by the processing device, a plurality of partitions located on a die of the memory device; selecting, based on evaluating a predefined criterion reflecting a physical layout of the die of the memory device, a first partition and a second partition of the plurality of partitions; and generating a codeword comprising first data residing on the first partition and second data residing on the second partition.
2. The system of claim 1, wherein evaluating the predefined criterion further comprises: comparing a physical address of the first partition to a physical address of the second partition.
3. The system of claim 1, wherein selecting the first partition and the second partition of the plurality of partitions is further based on evaluating an error rate distribution of the die of the memory device.
4. The system of claim 3, wherein evaluating the error rate distribution of the die of the memory device comprises: comparing a first value of a data state metric for the first partition to a second value of a data state metric for the second partition.
5. The system of claim 3, wherein evaluating the error rate distribution of the die of the memory device comprises: comparing a first raw bit error rate for the first partition to a second raw bit error rate for the second partition; and responsive to determining a difference between the first raw bit error rate and the second raw bit error rate, generating the codeword.
6. The system of claim 3, wherein evaluating the error rate distribution of the die of the memory device is based on on-chip real-time measuring data.
7. The system of claim 3, wherein evaluating the error rate distribution of the die of the memory device is based on bit error counts measured during chip development.
8. A method comprising: identifying, by a processing device, a plurality of partitions located on a die of a memory device; selecting, based on evaluating a predefined criterion reflecting a physical layout of the die of the memory device, a first partition and a second partition of the plurality of partitions; and generating a codeword comprising first data residing on the first partition and second data residing on the second partition.
9. The method of claim 8, wherein evaluating the predefined criterion further comprises: comparing a physical address of the first partition to a physical address of the second partition.
10. The method of claim 8, wherein selecting the first partition and the second partition of the plurality of partitions is further based on evaluating an error rate distribution of the die of the memory device.
11. The method of claim 10, wherein evaluating the error rate distribution of the die of the memory device comprises: comparing a first value of a data state metric for the first partition to a second value of a data state metric for the second partition.
12. The method of claim 10, wherein evaluating the error rate distribution of the die of the memory device comprises: comparing a first raw bit error rate for the first partition to a second raw bit error rate for the second partition; and responsive to determining a difference between the first raw bit error rate and the second raw bit error rate, generating the codeword.
13.
The method of claim 10, wherein evaluating the error rate distribution of the die of the memory device is based on on-chip real-time measuring data.
14. The method of claim 10, wherein evaluating the error rate distribution of the die of the memory device is based on bit error counts measured during chip development.
15. A non-transitory computer readable medium comprising instructions, which when executed by a processing device, cause the processing device to perform operations comprising: identifying a plurality of partitions located on a die of a memory device; evaluating a predefined criterion reflecting a physical layout of the die of the memory device; evaluating an error rate distribution of the die of the memory device; selecting, based on the predefined criterion and the error rate distribution, a first partition and a second partition of the plurality of partitions; and generating a codeword comprising first data residing on the first partition and second data residing on the second partition.
16. The non-transitory computer readable medium of claim 15, wherein evaluating the predefined criterion further comprises: comparing a physical address of the first partition to a physical address of the second partition.
17. The non-transitory computer readable medium of claim 15, wherein evaluating the error rate distribution of the die of the memory device comprises: comparing a first value of a data state metric for the first partition to a second value of a data state metric for the second partition.
18. The non-transitory computer readable medium of claim 15, wherein evaluating the error rate distribution of the die of the memory device comprises: comparing a first raw bit error rate for the first partition to a second raw bit error rate for the second partition; and responsive to determining a difference between the first raw bit error rate and the second raw bit error rate, generating the codeword.
19. The non-transitory computer readable medium of claim 15, wherein evaluating the error rate distribution of the die of the memory device is based on on-chip real-time measuring data.
20. The non-transitory computer readable medium of claim 15, wherein evaluating the error rate distribution of the die of the memory device is based on bit error counts measured during chip development.
CODEWORD ERROR LEVELING FOR 3DXP MEMORY DEVICES
TECHNICAL FIELD
[001] Embodiments of the disclosure relate generally to memory sub-systems, and more specifically, relate to codeword error leveling for 3DXP memory devices.
BACKGROUND
[002] A memory sub-system can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory sub-system to store data at the memory devices and to retrieve data from the memory devices.
BRIEF DESCRIPTION OF THE DRAWINGS
[003] The disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. The drawings, however, should not be taken to limit the disclosure to the specific embodiments, but are for explanation and understanding only.
[004] FIG. 1 illustrates an example computing system that includes a memory sub-system in accordance with some embodiments of the present disclosure.
[005] FIG. 2 illustrates an example physical layout of partitions of a memory device.
[006] FIG. 3 illustrates another example physical layout of partitions of a memory device.
[007] FIG. 4 is a flow diagram of an example method 400 to generate codewords for a memory device in accordance with some embodiments of the present disclosure.
[008] FIG. 5 is a flow diagram of an example method 500 to generate codewords for a memory device in accordance with some embodiments of the present disclosure.
[009] FIG. 6 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.
DETAILED DESCRIPTION
[0010] Aspects of the present disclosure are directed to codeword error leveling for 3DXP memory devices. A memory sub-system can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory sub-system that includes one or more components, such as memory
devices that store data. The host system can provide data to be stored at the memory sub-system and can request data to be retrieved from the memory sub-system.
[0011] A memory sub-system can include high density non-volatile memory devices where retention of data is desired when no power is supplied to the memory device. One example of non-volatile memory devices is three-dimensional cross-point (“3D cross-point” or “3DXP”) memory devices that are a cross-point array of non-volatile memory that can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Another example of non-volatile memory devices is a negative-and (NAND) memory device. Other examples of non-volatile memory devices are described below in conjunction with FIG. 1. A non-volatile memory device is a package of one or more dies. Each die can consist of one or more planes. For some types of non-volatile memory devices (e.g., 3DXP devices), each plane consists of a set of physical blocks. Each block consists of a set of pages. Each page consists of a set of memory cells ("cells"). A cell is an electronic circuit that stores information. Depending on the cell type, a cell can store one or more bits of binary information, and has various logic states that correlate to the number of bits being stored. The logic states can be represented by binary values, such as “0” and “1”, or combinations of such values.
[0012] Partitioning can refer to a process where memory is divided up into sections (e.g., partitions) for use by one or more applications, processes, operations, etc. A memory device can be segmented into two or more partitions. A partition can be individually addressable and can contain information related to a specific application, process, operation, etc.
[0013] Data operations can be performed by the memory sub-system. The data operations can be host-initiated operations. For example, the host system can initiate a data operation (e.g., write, read, erase, etc.) on a memory sub-system. The host system can send access requests (e.g., write command, read command) to the memory sub-system, such as to store data on a memory device at the memory sub-system and to read data from the memory device on the memory sub-system. The data to be read or written, as specified by a host request, is hereinafter referred to as “host data.” A host request can include logical address information (e.g., logical block address (LBA), namespace) for the host data, which is the location the host system associates with the host data. The logical address information (e.g., LBA, namespace) can be part of metadata for the host data. Metadata can also include error handling data (e.g., ECC codeword, parity code), data version (e.g., used to distinguish age of data written), valid bitmap (which LBAs or logical transfer units contain valid data), etc.
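To keep the discussion that follows concrete, an individually addressable partition can be modeled as a small record keyed by its physical address. This is an illustrative sketch only; the field names are ours, not structures defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Partition:
    """One individually addressable section of a die (illustrative model)."""
    partition_id: int
    physical_address: int  # position on the die
    rber: float = 0.0      # raw bit error rate observed for this partition

# A die segmented into sixteen partitions with consecutive physical addresses.
die_partitions = [Partition(i, i) for i in range(16)]
```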
[0014] A memory cell can be programmed (written to) by applying a certain voltage to the memory cell, which results in an electric charge being held by the memory cell. Certain voltages can be applied to memory cells through a power bus connected to a periphery circuit of the memory device. Given the physical layout of the memory device, partitions can be located at different distances from the periphery circuit. Due to this layout constraint, there can be a power drop or delay in reaching certain partitions of the memory device, which can result in voltage differences among the partitions. Voltage differences among the partitions can result in differences in the raw bit error rates (RBER) of each partition. For example, one partition can have a high RBER whereas another partition can have a low RBER.
[0015] The memory sub-system may encode data into a format for storage at the memory device(s). For example, a class of error detection and correcting codes (ECC) may be used to encode the data. Encoded data written to physical memory cells of a memory device can be referred to as a codeword. The codeword may include one or more of user data, error correcting code, metadata, or other information. The data stored at the memory sub-system may consist of one or more codewords. The codewords may consist of data from one or more partitions.
[0016] In some memory sub-systems, codewords can include data from neighboring partitions. Since neighboring partitions are located around the same physical area of the memory device, the RBER associated with each of these partitions will be similar. Thus, if, for example, partitions 0 to 9 are each associated with a high RBER, then the codeword consisting of data from partitions 0 to 9 will also be associated with a high RBER. Another codeword consisting of data from partitions located at a different physical area of the memory device may be associated with a low RBER if each of the partitions is associated with a low RBER. Two or more codewords can thus vary greatly in their respective RBERs, which can result in performance issues, such as memory uncorrectable error correction code (UECC) errors.
[0017] Aspects of the present disclosure address the above and other deficiencies by having a memory sub-system that generates codewords that would include data from different physical locations of the memory device, thus providing more location diversity coverage and reducing RBER variation among codewords. For example, instead of generating a codeword with data from neighboring partitions, such as partition 0 to partition 5, where the RBER associated with each partition may be similar, the codeword can be generated with data from partitions located at different physical locations of the memory device, thus covering partitions associated with different RBERs. Since the partitions are from different physical locations of the memory device and thus are associated with different RBERs, the resulting
codewords would no longer exhibit varying levels of RBER. Instead, the RBER level variation would be reduced among the codewords. In another example, a codeword can be constructed after determining the RBER associated with each partition. Thus, the codeword can be generated using partitions that are not all associated with high RBERs or are not all associated with low RBERs. The RBER level variation may consequently be reduced among generated codewords.
[0018] Advantages of the present disclosure include, but are not limited to, significantly reducing the codeword-to-codeword RBER level variation, thus reducing the possibility of UECC errors and increasing performance due to the reduced RBER level variation among codewords.
[0019] FIG. 1 illustrates an example computing system 100 that includes a memory sub-system 110 in accordance with some embodiments of the present disclosure. The memory sub-system 110 can include media, such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such.
[0020] A memory sub-system 110 can be a storage device, a memory module, or a combination of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded Multi-Media Controller (eMMC) drive, a Universal Flash Storage (UFS) drive, a secure digital (SD) card, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and various types of non-volatile dual in-line memory modules (NVDIMMs).
[0021] The computing system 100 can be a computing device such as a desktop computer, laptop computer, network server, mobile device, a vehicle (e.g., airplane, drone, train, automobile, or other conveyance), Internet of Things (IoT) enabled device, embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such computing device that includes memory and a processing device.
[0022] The computing system 100 can include a host system 120 that is coupled to one or more memory sub-systems 110. In some embodiments, the host system 120 is coupled to multiple memory sub-systems 110 of different types. FIG. 1 illustrates one example of a host system 120 coupled to one memory sub-system 110. As used herein, “coupled to” or “coupled with” generally refers to a connection between components, which can be an indirect communicative connection or direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical,
optical, magnetic, etc.
[0023] The host system 120 can include a processor chipset and a software stack executed by the processor chipset. The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., NVDIMM controller), and a storage protocol controller (e.g., PCIe controller, SATA controller). The host system 120 uses the memory sub-system 110, for example, to write data to the memory sub-system 110 and read data from the memory sub-system 110.
[0024] The host system 120 can be coupled to the memory sub-system 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., DIMM socket interface that supports Double Data Rate (DDR)), etc. The physical host interface can be used to transmit data between the host system 120 and the memory sub-system 110. The host system 120 can further utilize an NVM Express (NVMe) interface to access components (e.g., memory devices 130) when the memory sub-system 110 is coupled with the host system 120 by the physical host interface (e.g., PCIe bus). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system 110 and the host system 120. FIG. 1 illustrates a memory sub-system 110 as an example. In general, the host system 120 can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.
[0025] The memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).
[0026] Some examples of non-volatile memory devices (e.g., memory device 130) include a negative-and (NAND) type flash memory and write-in-place memory, such as a three-dimensional cross-point (“3D cross-point”) memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory cells can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories,
cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. NAND type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).
[0027] Each of the memory devices 130 can include one or more arrays of memory cells. One type of memory cell, for example, single level cells (SLC), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple level cells (TLCs), quad-level cells (QLCs), and penta-level cells (PLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, PLCs or any combination of such. In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, a QLC portion, or a PLC portion of memory cells. The memory cells of the memory devices 130 can be grouped as pages that can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks.
[0028] Although non-volatile memory components such as a 3D cross-point array of non-volatile memory cells and NAND type flash memory (e.g., 2D NAND, 3D NAND) are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), self-selecting memory, other chalcogenide based memories, ferroelectric transistor random-access memory (FeTRAM), ferroelectric random access memory (FeRAM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, or electrically erasable programmable read-only memory (EEPROM).
[0029] A memory sub-system controller 115 (or controller 115 for simplicity) can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130 and other such operations. The memory sub-system controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein. The memory sub-system controller 115 can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or other suitable processor.
[0030] The memory sub-system controller 115 can include a processing device, which includes one or more processors (e.g., processor 117), configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory sub-system controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system 110, including handling communications between the memory sub-system 110 and the host system 120.
[0031] In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing micro-code. While the example memory sub-system 110 in FIG. 1 has been illustrated as including the memory sub-system controller 115, in another embodiment of the present disclosure, a memory sub-system 110 does not include a memory sub-system controller 115, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system).
[0032] In general, the memory sub-system controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130. The memory sub-system controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., a logical block address (LBA), namespace) and a physical address (e.g., physical block address) that are associated with the memory devices 130. The memory sub-system controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory devices 130 as well as convert responses associated with the memory devices 130 into information for the host system 120.
[0033] The memory sub-system 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory sub-system 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory sub-system controller 115 and decode the address to access the memory devices 130.
[0034] In some embodiments, the memory devices 130 include local media controllers 135 that operate in conjunction with memory sub-system controller 115 to execute operations
on one or more memory cells of the memory devices 130. An external controller (e.g., memory sub-system controller 115) can externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, memory sub-system 110 is a managed memory device, which is a raw memory device 130 having control logic (e.g., local media controller 135) on the die and a controller (e.g., memory sub-system controller 115) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.
[0035] The memory sub-system 110 includes a codeword generator component 113 that can generate codewords using partitions from different physical locations of a memory device or based on the RBER associated with each partition of the memory device. In some embodiments, the memory sub-system controller 115 includes at least a portion of the codeword generator component 113. In some embodiments, the codeword generator component 113 is part of the host system 120, an application, or an operating system. In other embodiments, local media controller 135 includes at least a portion of the codeword generator component 113 and is configured to perform the functionality described herein.
[0036] In one example, the codeword generator component 113 can generate a codeword using partitions from different physical locations of a die of a memory device. For example, the codeword generator component 113 can compare a physical address of one partition to a physical address of another partition. If the partitions are not neighboring partitions based on the physical addresses of the partitions, then data from the partitions can be used in generating the codeword. In another example, the codeword generator component 113 can generate a codeword using partitions with different RBERs. For example, the codeword generator component 113 can determine the RBER for each partition in a die of a memory device. Using the determined RBER for each partition, the codeword generator component 113 can generate a codeword with data from partitions with different RBERs. Thus, the codeword generator component 113 may reduce the codeword level RBER variation in a memory device, resulting in an improvement in the performance of the memory device. Further details with regards to the operations of the codeword generator component 113 are described below.
[0037] FIG. 2 illustrates an example physical layout of partitions of a memory device. As described above, a conventional method of generating codewords is by using data from neighboring partitions in a die of a memory device. For example, in certain implementations, a codeword may be generated using PA0, PA1, PA2, PA3, PA4, and PA5, representing six neighboring partitions, as depicted in FIG. 2. PA0 to PA3 may be closer in distance to a
periphery circuit 210 than, for example, PA12 to PA15. This is due to the physical layout of the memory device. Given this difference in distance, when a power bus delivers power to the different partitions of the memory device, there may be a drop in power in the partitions that are farther away from the periphery circuit 210. Due to this power drop, there may be a higher RBER in partitions closer to the periphery circuit 210 than in partitions that are located farther away from the periphery circuit 210, which exhibit a lower RBER. Therefore, generating codewords using data from neighboring partitions can result in codewords with varying levels of RBER. As discussed above, this can result in reduced performance due to an increase in UECC errors.
[0038] Accordingly, FIG. 3 illustrates another example physical layout of partitions of a memory device. As shown in FIG. 3, in certain implementations, a codeword may be generated (e.g., by the codeword generator component 113 of FIG. 1) using PA0, PA1, PA6, PA11, PA12, and PA13, representing six partitions from differing locations on a die of the memory device. Since the partitions are located at different distances from a periphery circuit 310, the RBER may be varied among the partitions, thus avoiding extremely high RBER values which may result from selecting partitions with all high RBERs. Further details with regards to the operations of the codeword generator component 113 are described below.
[0039] FIG. 4 is a flow diagram of an example method 400 to generate codewords for a memory device, in accordance with some embodiments of the present disclosure. The method 400 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the codeword generator component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
[0040] At operation 405, the processing logic identifies multiple partitions. The multiple partitions may be located on a die of a memory device. In an illustrative example, identifying the multiple partitions may include identifying multiple partitions residing on a randomly selected die of the memory device. In certain implementations, identifying the multiple partitions may be in response to a request, by the memory sub-system (e.g., the memory sub-
system controller 115 of FIG. 1), to generate a codeword. In certain implementations, the memory device may be a 3DXP memory device.
[0041] At operation 410, the processing logic selects, based on evaluating a predefined criterion reflecting a physical layout of the die of the memory device, two partitions of the multiple partitions. In certain implementations, evaluating the predefined criterion reflecting the physical layout of the die of the memory device includes comparing a physical address of one of the selected partitions to a physical address of the other selected partition. In an illustrative example, the processing logic may compare the physical address of the first selected partition to the physical address of the second selected partition. In response to determining that the physical address of the first selected partition and the physical address of the second selected partition are not closely related in physical location, the processing logic may generate a codeword. Closely related physical addresses may include, but are not limited to, consecutive physical addresses. Closely related physical addresses may include physical addresses of partitions associated with the same or similar RBERs.
[0042] In certain implementations, selecting the two partitions of the multiple partitions is based on evaluating an error rate distribution of the die of the memory device. For example, evaluating the error rate distribution of the die of the memory device may include comparing a value of a data state metric for the first selected partition to a value of a data state metric for the second selected partition. In certain implementations, the data state metric includes a raw bit error rate (RBER) associated with each partition. In an illustrative example, the processing logic may compare the RBER of the first selected partition to the RBER of the second selected partition. In response to determining a difference between the RBER of the first selected partition and the RBER of the second selected partition, the processing logic may generate a codeword. For example, the difference between the RBER of the first selected partition and the RBER of the second selected partition may include identifying a high RBER for the first selected partition and a low RBER for the second selected partition. In certain implementations, evaluating the error rate distribution of the die of the memory device may be based on data measured in real time on the memory device. For example, the memory sub-system may determine the RBER of each partition residing on a die of the memory device. The metadata reflecting the measured RBER levels may be stored in a metadata structure, such as a table, on the memory device (e.g., the memory devices 130 or 140 of FIG. 1). In certain implementations, evaluating the error rate distribution of the die of the memory device may be based on bit error counts measured during the development of the memory device. In certain implementations, selecting the two partitions of the multiple partitions may include
both evaluating the predefined criterion reflecting the physical layout of the die of the memory device and evaluating the error rate distribution of the die of the memory device.
[0043] At operation 415, the processing logic generates a codeword comprising data residing on the first selected partition and data residing on the second selected partition. In an illustrative example, generating the codeword includes identifying the data residing on the first selected partition and the data residing on the second selected partition. In certain implementations, generating the codeword may further include data residing on a certain number of partitions (e.g., six partitions) of the multiple partitions.
[0044] FIG. 5 is a flow diagram of an example method 500 to generate codewords for a memory device, in accordance with some embodiments of the present disclosure. The method 500 can be performed by processing logic that can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 500 is performed by the codeword generator component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.
[0045] At operation 505, the processing logic identifies multiple partitions. The multiple partitions may be located on a die of a memory device. In an illustrative example, identifying the multiple partitions may include identifying multiple partitions residing on a randomly selected die of the memory device. In certain implementations, identifying the multiple partitions may be in response to a request, by the memory sub-system (e.g., the memory sub-system controller 115 of FIG. 1), to generate a codeword. In certain implementations, the memory device may be a 3DXP memory device.
[0046] At operation 510, the processing logic evaluates a predefined criterion reflecting a physical layout of the die of the memory device. In certain implementations, evaluating the predefined criterion reflecting the physical layout of the die of the memory device includes comparing a physical address of one of the partitions in the die to a physical address of another partition in the die. In an illustrative example, the processing logic may compare the physical address of the first partition in the die to the physical address of the second partition in the die. In response to determining that the physical address of the first partition and the
physical address of the second partition are not closely related in physical location, the processing logic may generate a codeword. Closely related physical addresses may include, but are not limited to, consecutive physical addresses. Closely related physical addresses may include physical addresses of partitions associated with the same or similar RBERs.
[0047] At operation 515, the processing logic evaluates an error rate distribution of the die of the memory device. For example, evaluating the error rate distribution of the die of the memory device may include comparing a value of a data state metric for the first partition of the die to a value of a data state metric for the second partition of the die. In certain implementations, the data state metric includes a raw bit error rate (RBER) associated with each partition. In an illustrative example, the processing logic may compare the RBER of the first partition to the RBER of the second partition. In response to determining a difference between the first RBER and the second RBER, the processing logic may generate a codeword. For example, the difference between the RBER of the first partition and the RBER of the second partition may include identifying a high RBER for the first partition and a low RBER for the second partition. In certain implementations, evaluating the error rate distribution of the die of the memory device may be based on data measured in real time on the memory device. For example, the memory sub-system may determine the RBER of each partition residing on a die of the memory device. The metadata reflecting the measured RBER levels may be stored in a metadata structure, such as a table, on the memory device (e.g., the memory devices 130 or 140 of FIG. 1). In certain implementations, evaluating the error rate distribution of the die of the memory device may be based on bit error counts measured during the development of the memory device.
[0048] At operation 520, the processing logic selects, based on the predefined criterion reflecting the physical layout of the die and the error rate distribution of the die, two partitions of the multiple partitions. In certain implementations, the processing logic selects one partition of the multiple partitions based on the predefined criterion reflecting the physical layout of the die, and the processing logic selects the other partition of the multiple partitions based on the error rate distribution of the die.
[0049] At operation 525, the processing logic generates a codeword comprising data residing on the first selected partition and data residing on the second selected partition. In an illustrative example, generating the codeword includes identifying the data residing on the first selected partition and the data residing on the second selected partition. In certain implementations, generating the codeword may further include data residing on a certain number of partitions (e.g., six partitions) of the multiple partitions.
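Operations 510 through 525 can be summarized in a short sketch. The code below reuses the illustrative Partition model from earlier; the striding heuristic and the consecutive-address test are our own choices, since the disclosure leaves the exact selection policy open.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Partition:  # same illustrative model as sketched earlier
    partition_id: int
    physical_address: int
    rber: float = 0.0

def select_partitions(partitions, k=6):
    """Operations 510-520: choose k partitions spread across the die.

    Striding across the address-sorted list is one simple way to satisfy
    the layout criterion; because RBER tends to track distance from the
    periphery circuit, it also mixes high- and low-RBER partitions.
    """
    ordered = sorted(partitions, key=lambda p: p.physical_address)
    stride = max(1, len(ordered) // k)
    chosen = ordered[::stride][:k]
    # Evaluate the predefined criterion: reject closely related
    # (here, consecutive) physical addresses.
    for a, b in zip(chosen, chosen[1:]):
        if b.physical_address - a.physical_address <= 1:
            raise ValueError("selected partitions are closely related")
    return chosen

def generate_codeword(partitions, read_data):
    """Operation 525: concatenate data residing on each selected partition."""
    return b"".join(read_data(p) for p in select_partitions(partitions))

# Example: sixteen partitions PA0-PA15; the codeword draws on PA0, PA2,
# PA4, PA6, PA8, and PA10 rather than on six neighbors.
die = [Partition(i, i) for i in range(16)]
print([p.partition_id for p in select_partitions(die)])
```

An implementation following operation 515 could additionally compare the partitions' stored RBER values, e.g., pairing high-RBER with low-RBER partitions before accepting a selection.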
[0050] FIG. 6 illustrates an example machine of a computer system 600 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, can be executed. In some embodiments, the computer system 600 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory sub-system (e.g., the memory sub-system 110 of FIG. 1) or can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the codeword generator component 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.
[0051] The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
[0052] The example computer system 600 includes a processing device 602, a main memory 604 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or RDRAM, etc.), a static memory 606 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 618, which communicate with each other via a bus 630.
[0053] Processing device 602 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 602 is configured to execute instructions 626 for performing the operations and steps discussed herein. The computer system 600 can further
include a network interface device 608 to communicate over the network 620.
[0054] The data storage system 618 can include a machine-readable storage medium 624 (also known as a computer-readable medium) on which is stored one or more sets of instructions 626 or software embodying any one or more of the methodologies or functions described herein. The instructions 626 can also reside, completely or at least partially, within the main memory 604 and/or within the processing device 602 during execution thereof by the computer system 600, the main memory 604 and the processing device 602 also constituting machine-readable storage media. The machine-readable storage medium 624, data storage system 618, and/or main memory 604 can correspond to the memory sub-system 110 of FIG. 1.
[0055] In one embodiment, the instructions 626 include instructions to implement functionality corresponding to a codeword generator component (e.g., the codeword generator component 113 of FIG. 1). While the machine-readable storage medium 624 is shown in an example embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term “machine-readable storage medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “machine-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
[0056] Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
[0057] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied
to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.
[0058] The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
[0059] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below. In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the disclosure as described herein.
[0060] The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions, which can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium such as a read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory components, etc.
[0061] In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and
drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
Methods, systems, computer-readable media, and apparatuses for gesture detection using ultrasound beamforming are presented. In some embodiments, a method for gesture detection utilizing ultrasound beamforming includes projecting an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming. The method further includes receiving an ultrasound echo from an object in contact with the surface. The method additionally includes interpreting a gesture based at least in part on the received ultrasound echo. |
WHAT IS CLAIMED IS:1. A method for gesture detection, comprising:projecting an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming;receiving an ultrasound echo from an object in contact with the surface; andinterpreting a gesture based at least in part on the received ultrasound echo.2. The method of claim 1 further comprising converting the interpreted gesture into a digital image, wherein the digital image is a representation of the interpreted gesture.3. The method of claim 1 further comprising executing an instruction based at least in part on the interpreting step.4. The method of claim 1 wherein the object comprises a user extremity.5. The method of claim 1 wherein the projecting further comprises creating a 2-D gesture scanning area on the surface.6. The method of claim 5 wherein the 2-D gesture scanning area is defined based at least in part on a frequency or strength of the projected ultrasound wave.7. The method of claim 1 wherein the projecting further comprises projecting the ultrasound wave parallel to the surface at a distance of 5mm or less.8. An apparatus for gesture detection, comprising: an ultrasound transducer array configured to:project an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming;receive an ultrasound echo from an object in contact with the surface; and a processor coupled to the ultrasound transducer configured to interpret a gesture based at least in part on the received ultrasound echo.9. The apparatus of claim 8 wherein the processor is further configured to convert the interpreted gesture into a digital image, wherein the digital image is a representation of the interpreted gesture.10. The apparatus of claim 8 wherein the processor is further configured to execute an instruction based at least in part on the interpreting step.11. The apparatus of claim 8 wherein the object comprises a user extremity.12. The apparatus of claim 8 wherein the projecting further comprises creating a 2-D gesture scanning area on the surface.13. The apparatus of claim 12 wherein the 2-D gesture scanning area is defined based at least in part on a frequency or strength of the projected ultrasound wave.14. The apparatus of claim 8 wherein the projecting further comprises projecting the ultrasound wave parallel to the surface at a distance of 5mm or less.15. An apparatus for gesture detection, comprising: means for projecting an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming;means for receiving an ultrasound echo from an object in contact with the surface; andmeans for interpreting a gesture based at least in part on the received ultrasound echo.16. The apparatus of claim 15 further comprising means for converting the interpreted gesture into a digital image, wherein the digital image is a representation of the interpreted gesture.17. The apparatus of claim 15 further comprising means for executing an instruction based at least in part on the interpreting step.18. The apparatus of claim 15 wherein the object comprises a user extremity.19. The apparatus of claim 15 wherein the projecting further comprises creating a 2-D gesture scanning area on the surface.20. The apparatus of claim 19 wherein the 2-D gesture scanning area is defined based at least in part on a frequency or strength of the projected ultrasound wave.21. 
The apparatus of claim 15 wherein the projecting further comprises projecting the ultrasound wave parallel to the surface at a distance of 5mm or less.22. A processor-readable non-transitory medium comprising processor readable instructions configured to cause a processor to:project an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming;receive an ultrasound echo from an object in contact with the surface; andinterpret a gesture based at least in part on the received ultrasound echo.23. The processor-readable non-transitory medium of claim 22 wherein the instructions are further configured to cause the processor to convert the interpreted gesture into a digital image, wherein the digital image is a representation of the interpreted gesture.24. The processor-readable non-transitory medium of claim 22 wherein the instructions are further configured to cause the processor to execute an instruction based at least in part on the interpreting step.25. The processor-readable non-transitory medium of claim 22 wherein the object comprises a user extremity.26. The processor-readable non-transitory medium of claim 22 wherein the projecting further comprises creating a 2-D gesture scanning area on the surface.27. The processor-readable non-transitory medium of claim 26 wherein the 2-D gesture scanning area is defined based at least in part on a frequency or strength of the projected ultrasound wave.28. The processor-readable non-transitory medium of claim 22 wherein the projecting further comprises projecting the ultrasound wave parallel to the surface at a distance of 5mm or less. |
SYSTEM AND METHOD FOR MULTI-TOUCH GESTURE DETECTION USING ULTRASOUND BEAMFORMINGBACKGROUND[0001] Aspects of the disclosure relate to gesture detection. More specifically, aspects of the disclosure relate to multi-touch gesture detection using ultrasound beamforming.[0002] Modern touch screen devices allow for user control using simple or multi-touch gestures by touching the screen with one or more fingers. Some touchscreen devices may also detect objects such as a stylus or ordinary or specially coated gloves. The touchscreen enables the user to interact directly with what is displayed. Recently, display devices that may include touch-screen features have become larger in size. For example, the average television size is quickly approaching 40 diagonal inches. Including touch-screen functionality in these larger displays is cost prohibitive. Additionally, the large size of the touch-screens requires increased extremity movement by the user, resulting in a diminished user experience. Current solutions exist in the form of traditional touch-screens, infrared (IR) LED-based touch frames, and dual IR camera touch solutions. However, all of these solutions require a dedicated product for different touch sizes.[0003] Accordingly, a need exists for a cost-effective and user-friendly method for controlling larger display devices using simple or multi-touch gestures.BRIEF SUMMARY[0004] Certain embodiments describe a portable device capable of outputting ultrasound via beamforming along a surface for multi-touch gesture recognition.[0005] In some embodiments, a method for gesture detection includes projecting an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming. The method further includes receiving an ultrasound echo from an object in contact with the surface. The method also includes interpreting a gesture based at least in part on the received ultrasound echo. [0006] In some embodiments, the method further includes converting the interpreted gesture into a digital image, wherein the digital image is a representation of the interpreted gesture.[0007] In some embodiments, the method further includes executing an instruction based at least in part on the interpreting step.[0008] In some embodiments, the object includes a user extremity.[0009] In some embodiments, the projecting further includes creating a 2-D gesture scanning area on the surface.[0010] In some embodiments, the 2-D gesture scanning area is defined based at least in part on a frequency or strength of the projected ultrasound wave.[0011] In some embodiments, the projecting further comprises projecting the ultrasound wave parallel to the surface at a distance of 5mm or less.[0012] In some embodiments, an apparatus for gesture detection includes an ultrasound transducer array configured to project an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming. The ultrasound transducers are also configured to receive an ultrasound echo from an object in contact with the surface. The apparatus also includes a processor coupled to the ultrasound transducer configured to interpret a gesture based at least in part on the received ultrasound echo.[0013] In some embodiments, an apparatus for gesture detection includes means for projecting an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming. 
The apparatus further includes means for receiving an ultrasound echo from an object in contact with the surface. The apparatus also includes means for interpreting a gesture based at least in part on the received ultrasound echo.[0014] In some embodiments, a processor-readable medium includes processor readable instructions configured to cause a processor to project an ultrasound wave parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming. The processor readable instructions are further configured to cause the processor to receive an ultrasound echo from an object in contact with the surface. The processor readable instructions are also configured to cause the processor to interpret a gesture based at least in part on the received ultrasound echo.BRIEF DESCRIPTION OF THE DRAWINGS[0015] Aspects of the disclosure are illustrated by way of example. In the accompanying figures, like reference numbers indicate similar elements, and:[0016] FIG. 1 illustrates a simplified block diagram of an ultrasound beamforming device that may incorporate one or more embodiments;[0017] FIG. 2A illustrates a gesture environment including an external system coupled to an ultrasound beamforming device, in accordance with some embodiments;[0018] FIG. 2B illustrates performing a multi-touch gesture in a gesture environment, in accordance with some embodiments;[0019] FIG. 3 illustrates one embodiment of the ultrasound beamforming device, in accordance with some embodiments;[0020] FIG. 4 illustrates projection of ultrasound waves along a whiteboard, in accordance with some embodiments;[0021] FIG. 5 is an illustrative flow chart depicting an exemplary operation for multi-touch gesture detection using ultrasound beamforming; and[0022] FIG. 6 illustrates an example of a computing system in which one or more embodiments may be implemented.DETAILED DESCRIPTION[0023] Several illustrative embodiments will now be described with respect to the accompanying drawings, which form a part hereof. While particular embodiments, in which one or more aspects of the disclosure may be implemented, are described below, other embodiments may be used and various modifications may be made without departing from the scope of the disclosure or the spirit of the appended claims.[0024] In accordance with present embodiments, a small, portable, and scalable device capable of ultrasound beamforming may project an ultrasound beam parallel to a surface. In effect, this functionality may virtually convert a flat surface (e.g., tabletop) to a multi-touch surface capable of functioning as a user input device. The size of the multi-touch surface may be adjustable based on the needs of the application. The ultrasound beamforming technique used by the device may be similar to ultrasound B-mode equipment often used in medical applications (e.g., sonograms). The device may include an ultrasound transducer array operable to transmit and receive ultrasound waves, analog-to-digital converter (ADC) channels to digitize received ultrasound signals, a beamer to control transmission timing of the ultrasound beams, and a beamformer to reconstruct received ultrasound beams.[0025] In some embodiments, the device may be as small as a typical matchbox. In other embodiments, the device may be built into a mobile device, e.g., a smartphone. As such, the minimal size and weight of the device offer advantages over current solutions. 
The device may project an ultrasound beam onto a surface and detect differences in the projected beam to determine whether a user has initiated a touch with the surface. The user may touch the surface using any user extremity. The projected beam may vary in size depending on the application and the size of the beam may further be fine-tuned based on the wave frequency and strength of the projected ultrasound beam. Further, the beam may be of a lower resolution than those used in medical applications, allowing for lower cost applications and/or faster processing time.[0026] A method and apparatus for multi-touch gesture detection using ultrasound beamforming are disclosed. In the following description, numerous specific details are set forth such as examples of specific components, circuits, and processes to provide a thorough understanding of the present disclosure. Also, in the following description and for purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the present embodiments. However, it will be apparent to one skilled in the art that these specific details may not be required to practice the present embodiments. In other instances, well-known circuits and devices are shown in block diagram form to avoid obscuring the present disclosure. The term "coupled" as used herein means connected directly to or connected through one or more intervening components or circuits. Any of the signals provided over various buses described herein may be time-multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit elements or software blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be a single signal line, and each of the single signal lines may alternatively be buses, and a single line or bus might represent any one or more of myriad physical or logical mechanisms for communication between components. The present embodiments are not to be construed as limited to specific examples described herein but rather to include within their scopes all embodiments defined by the appended claims.[0027] FIG. 1 illustrates a simplified block diagram of an ultrasound beamforming device 100 that may incorporate one or more embodiments. Ultrasound beamforming device 100 includes a processor 110, display 130, input device 140, speaker 150, memory 160, ADC 120, DAC 121, beamformer 180, beamer 181, ultrasound transducer 170, and computer-readable medium 190.[0028] Processor 110 may be any general-purpose processor operable to carry out instructions on the ultrasound beamforming device 100. The processor 110 is coupled to other units of the ultrasound beamforming device 100 including display 130, input device 140, speaker 150, memory 160, ADC 120, DAC 121, beamformer 180, beamer 181, ultrasound transducer 170, and computer-readable medium 190.[0029] Display 130 may be any device that displays information to a user. Examples may include an LCD screen, CRT monitor, or seven-segment display.[0030] Input device 140 may be any device that accepts input from a user. Examples may include a keyboard, keypad, mouse, or touch input.[0031] Speaker 150 may be any device that outputs sound to a user. Examples may include a built-in speaker or any other device that produces sound in response to an electrical audio signal.[0032] Memory 160 may be any magnetic, electronic, or optical memory. Memory 160 includes two memory modules, module 1 162 and module 2 164. 
It can be appreciated that memory 160 may include any number of memory modules. An example of memory 160 may be dynamic random access memory (DRAM).[0033] Computer-readable medium 190 may be any magnetic, electronic, optical, or other computer-readable storage medium. Computer-readable storage medium 190 includes ultrasound transmission module 192, echo detection module 194, gesture interpretation module 196, command execution module 198, and image conversion module 199. [0034] DAC 121 is configured to convert a digital number representing amplitude to a continuous physical quantity. More specifically, in the present example, DAC 121 is configured to convert digital representations of ultrasound signals to an analog quantity prior to transmission of the ultrasound signals. DAC 121 may perform conversion of a digital quantity, prior to transmission by the ultrasound transducers 170 (described below).[0035] Ultrasound transducer 170 is configured to convert voltage into ultrasound, or sound waves above the normal range of human hearing. Ultrasound transducer 170 may also convert ultrasound to voltage. The ultrasound transducer 170 may include a plurality of transducers that include piezoelectric crystals having the property of changing size when a voltage is applied; applying an alternating current across them causes them to oscillate at very high frequencies, producing very high frequency sound waves. The ultrasound transducers 170 may be arranged in an array. The array may be arranged in such a way that ultrasound waves transmitted therefrom experience constructive interference at particular angles while others experience destructive interference.[0036] Ultrasound transmission module 192 is configured to regulate ultrasound transmissions on the device 100. The ultrasound transmission module 192 may interface with the ultrasound transducers 170 and place the ultrasound transducers 170 in a transmit mode or a receive mode. In the transmit mode, the ultrasound transducers 170 may transmit ultrasound waves. In the receive mode, the ultrasound transducers 170 may receive ultrasound echoes. The ultrasound transmission module 192 may change the ultrasound transducer 170 between the receive and transmit modes on the fly. The ultrasound transducer 170 may also pass feedback voltages from ultrasound echoes to an ADC (described below).[0037] Beamer 181 is configured to directionally transmit ultrasound waves. In some embodiments, the beamer 181 may be coupled to the array of ultrasound transducers 170. The beamer 181 may also be communicatively coupled to the ultrasound transmission module 192. The beamer 181 may generate control timings of the ultrasound transducers 170. That is, the trigger timing of each ultrasound transducer 170 may be controlled by the beamer 181. The beamer 181 may also control the transmission strength of the output from each ultrasound transducer 170. Based on the timing of each ultrasound transducer 170, the ultrasound wave transmitted may form a sound "beam" having a controlled direction. To change the directionality of the array of ultrasound transducers 170 when transmitting, the beamer 181 controls the phase and relative amplitude of the signal at each transducer 170, in order to create a pattern of constructive and destructive interference in the wavefront. Beamer 181 may transmit the waves, via ultrasound transducers 170, along or parallel to a surface (e.g., tabletop) and may contain logic for surface detection. 
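To make the timing control just described concrete, the following sketch computes the per-transducer trigger delays a beamer might apply to steer a beam from a uniform linear array. It is a minimal illustration only: the function and constant names are not from this disclosure, a nominal speed of sound in air is assumed, and a real beamer would realize these delays in hardware.

    import math

    SPEED_OF_SOUND = 343.0  # m/s in air; an assumed nominal value

    def steering_delays(num_elements, pitch_m, steer_deg):
        # Per-transducer trigger delays (seconds) for a uniform linear
        # array: tau_n = n * d * sin(theta) / c, so the emitted wavefronts
        # add constructively along the steering angle.
        theta = math.radians(steer_deg)
        raw = [n * pitch_m * math.sin(theta) / SPEED_OF_SOUND
               for n in range(num_elements)]
        base = min(raw)  # shift so no trigger time is negative
        return [t - base for t in raw]

    # Example: 16 elements at 2 mm pitch, beam steered 30 degrees off axis
    for n, tau in enumerate(steering_delays(16, 0.002, 30.0)):
        print(f"element {n}: fire after {tau * 1e6:.2f} us")

Sweeping the steering angle over a range of values in this manner is what allows the transmitted beams to scan across a surface.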
The beamer 181 may also include the capability to modify the ultrasound waves. For example, if the wavelength or strength of the ultrasound waves needs to be modified, the beamer 181 may include logic to control the ultrasound transducers 170.[0038] ADC 120 is configured to convert a continuous physical quantity to a digital number that represents the quantity's amplitude. More specifically, in the present example, the ADC 120 is configured to convert received ultrasound echoes into a digital representation. The digital representation may then be used for the gesture recognition techniques described herein.[0039] The beamformer 180 is configured to process received ultrasound echoes from ultrasound waves reflected off of an object. The beamformer may analyze the ultrasound echoes, after conversion to a digital representation by the ADC 120. Here, information from the different transducers in the array is combined in a way where the expected pattern of ultrasound echoes is preferentially observed. The beamformer 180 may reconstruct the digital representation of the ultrasound echoes to a strength/frequency 1-D array. A combination of multiple 1-D arrays may be used to generate a 2-D array to be processed by the device 100.[0040] Echo detection module 194 is configured to detect an ultrasound echo. The ultrasound echo may be generated by reflection off an object that comes into the beam of the ultrasound waves generated by the ultrasound transmission module 192. The object may be a user extremity such as a finger or an arm. The echo detection module 194 may interface with the ADC 120 to convert the received ultrasound wave echoes into a digital representation, as described above. Echo detection module 194 may also filter out irrelevant received ultrasound echoes.[0041] The gesture interpretation module 196 is configured to interpret a gesture from the received ultrasound echo detected by the echo detection module 194. Based on the ultrasound echoes that the echo detection module 194 receives, which the ADC 120 in turn converts to a digital representation, the gesture interpretation module 196 may reproduce a gesture performed by the user. For example, if a user performs a "swipe" gesture with their index finger, the gesture interpretation module 196 may reproduce and interpret the swipe based on the digital representation of the ultrasound echoes.[0042] The command execution module 198 is configured to execute a command on a system based on the gesture interpreted by gesture interpretation module 196. In some embodiments, the device 100 may be coupled to an external system for purposes of translating user input (accomplished by performing gestures) on the surface to execute a command on an external system. The external system may be, for example, a television set, gaming console, computer system, or any other system capable of receiving user input. In one non-limiting example, a user may perform a "swipe" over the virtual gesture surface created by the ultrasound beamforming device 100. Once the "swipe" gesture is recognized and interpreted by the gesture interpretation module, the command execution module 198 may translate the recognized and interpreted swipe into a native command for the external system. For example, if a user were to "swipe" from left to right, the command execution module 198 may translate the gesture into a next page command for a web-browser application within a computing system. 
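As an illustration of this translation step, the sketch below resolves an interpreted gesture to a command native to the external system. The gesture labels and command names are hypothetical placeholders, not values from this disclosure; a real deployment would populate the table with the external system's own command set.

    # Hypothetical gesture-to-command mappings; a command database for a
    # real external system would be populated with its native commands.
    COMMAND_DB = {
        ("swipe", "left_to_right"): "NEXT_PAGE",
        ("swipe", "right_to_left"): "PREVIOUS_PAGE",
        ("pinch", "inward"): "ZOOM_OUT",
        ("pinch", "outward"): "ZOOM_IN",
    }

    def command_for_gesture(kind, direction):
        # Resolve an interpreted gesture to a native command, or None
        # when the external system has no mapping for it.
        return COMMAND_DB.get((kind, direction))

    assert command_for_gesture("swipe", "left_to_right") == "NEXT_PAGE"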
In some embodiments, the command execution module 198 may interface with a database (not shown) to retrieve an available list of commands native to the external system.[0043] Image conversion module 199 is configured to convert a series of gestures into a digital file format. The digital file format may be, for example, Portable Document Format (PDF), JPEG, PNG, etc. The memory 160 within ultrasound beamforming device 100 may be used to store the series of gestures prior to conversion into the digital file format.[0044] FIG. 2A illustrates a gesture environment 200 including an external system 210 coupled to an ultrasound beamforming device 100. In this particular example, the external system 210 is a television or other display device. The ultrasound beamforming device 100 may be coupled to external system 210 by either a wired or wireless connection. Some examples of wired connections include, but are not limited to, Universal Serial Bus (USB), FireWire, Thunderbolt, etc. Some examples of wireless connections include, but are not limited to, Wi-Fi, Bluetooth, RF, etc. FIG. 2A also includes a surface 220. Surface 220 may be any flat surface including, but not limited to, a tabletop, countertop, floor, wall, etc. Surface 220 may also include surfaces of movable objects such as magazines, notepads, or any other movable object having a flat surface.[0045] As described above, ultrasound beamforming device 100 is configured to project ultrasound waves 240 and receive ultrasound echoes 250. The ultrasound echoes 250 may be reflected off an object, such as a user extremity. In this example, the user extremity is a user's hand 260. Specifically, the ultrasound echoes 250 reflect off of the index finger 262 of the user's hand 260. The ultrasound echoes 250 may be detected by the ultrasound beamforming device 100 using the echo detection module 194, as described above.[0046] The ultrasound beamforming device 100 may be configured to create a virtual gesture surface 230 by projecting the ultrasound waves 240 along or parallel to the surface 220. The virtual gesture surface 230 may be formed on the entire surface 220 or within a specific area of the surface 220 depending on the manner in which the ultrasound waves are projected. In some embodiments, the ultrasound beamforming device 100 may project the ultrasound waves 240 using beamforming techniques. Such a technique may allow the ultrasound beamforming device 100 to control the direction of the ultrasound waves 240 projected toward the surface 220. In some embodiments, the ultrasound beamforming device 100 may include logic to automatically detect a surface 220 and project the ultrasound waves 240 towards the surface without any manual calibration. The ultrasound beamforming device 100 may project the ultrasound waves using ultrasound transmission module 192, ultrasound transducer 170, and beamer 181, as described above. In some embodiments, the difference in distance between the projected ultrasound waves 240 and the surface 220 may be 5mm or less.[0047] As described above, the ultrasound beamforming device 100 may recognize and interpret a gesture performed by a user extremity. For example, the ultrasound beamforming device 100 may recognize and interpret a gesture performed by the finger 262 of the user's hand 260. The recognizing and interpreting may be accomplished using the gesture interpretation module 196, as described above. 
The gesture interpretation module 196 may determine differences in time between when an ultrasound wave 240 was projected along the surface 220 and when an ultrasound echo 250 was received by the ultrasound beamforming device 100. From the determined difference in time, the distance of the user's finger 262 from the ultrasound beamforming device 100 may be determined. Additionally, the angle and direction of the ultrasound echo 250 may also be determined by the gesture interpretation module 196.[0048] In some embodiments, the ultrasound waves 240 are short-timed pulses travelling away from the ultrasound transducers 170 along the beam direction. When the ultrasound waves 240 come into contact with an object, ultrasound echoes will bounce back and travel towards the ultrasound transducers 170. Some of the energy from the ultrasound waves 240 passes through the object and continues on its path. When those ultrasound waves 240 come into contact with another object, more ultrasound echoes will bounce back and travel towards the ultrasound transducers 170. Accordingly, by measuring the time between the transmission of an ultrasound wave and the reception of its ultrasound echo, the distance from the device 100 to the object may be calculated. More ultrasound waves 240 may be transmitted in another direction (typically a few degrees from the last transmission) and further ultrasound echoes are received from these ultrasound waves 240. In some embodiments, hundreds of ultrasound waves may be transmitted and hundreds of ultrasound echoes may be received, which may eventually form a 2-D scanning area. In some embodiments, multiple ultrasound waves 240 may be transmitted in different directions simultaneously to speed up the scanning rate.[0049] Once the gesture is recognized and interpreted by the gesture interpretation module 196, the ultrasound beamforming device 100 may relay a command for execution to the external system 210. The command may be based on the recognized and interpreted gesture. For example, if the recognized gesture is the finger 262 swiping in a left-to-right motion on the virtual gesture surface 230, the command may be for the external system 210 to flip to a next page within a user interface. In some embodiments, the gesture environment 200 may include a command database 270. The command database 270 may store a plurality of command mappings that map a gesture to a command native to the external system 210. Upon recognizing and interpreting a gesture, the ultrasound beamforming device 100 may query the command database 270 with the recognized and interpreted gesture in order to determine a command native to the external system 210 that is represented by the gesture. In some embodiments, the native command may be relayed from the ultrasound beamforming device 100 to the external system 210 using one of the above mentioned wired or wireless connections.[0050] It can be appreciated that while one finger 262 is shown performing a gesture on the virtual gesture surface 230, any number of fingers or other user extremities may be used to perform a gesture on the virtual gesture surface 230. This multi-touch functionality may be operable to execute a wide array of commands on the external system 210.[0051] FIG. 2B illustrates performing a multi-touch gesture in a gesture environment 200. The gesture environment includes an external system 210 coupled to an ultrasound beamforming device 100. FIG. 2B is similar to FIG. 
2A except that the user's hand 260 is performing a multi-touch "pinching" gesture with his/her fingers 262. The pinching gesture may involve the user bringing his/her two fingers 262 together on the virtual gesture surface 230. The pinching gesture may represent a user command for zooming of content on the external system 210.[0052] At a first time, the device 100 may project a series of ultrasound waves 240 toward the user's fingers 262. As the user performs the pinching motion 280 with his/her fingers, the device 100 may continue to project more ultrasound waves 240 while simultaneously receiving ultrasound echoes 250 reflected off the user's fingers 262. From analyzing the received ultrasound echoes 250, as described above, the device may recognize the entire pinching motion 280 from the user's fingers 262.[0053] Once the gesture is recognized and interpreted by the gesture interpretation module 196, the ultrasound beamforming device 100 may relay a command for execution to the external system 210. The command may be based on the recognized and interpreted gesture from the pinching motion 280.[0054] FIG. 3 illustrates one embodiment of the ultrasound beamforming device 100, in accordance with some embodiments. As described with reference to FIG. 1, the ultrasound beamforming device 100 includes a beamformer 180, beamer 181, one or more analog-to-digital converters 120, an ultrasound transmission module 192, and one or more ultrasound transducers 170. [0055] The ultrasound beamforming device is configured to send ultrasound waves 240 and receive ultrasound echoes 250. The ultrasound echoes 250 may be a reflection of an ultrasound wave off an object. In some embodiments, the object may be a user extremity. The plurality of ultrasound waves 240 are projected by the ultrasound transducers 170 of the ultrasound beamforming device 100. The arrangement of the ultrasound transducers 170 may determine in part the angle, frequency, and strength of the ultrasound waves 240. In some embodiments, the ultrasound waves 240 are projected along a surface 220.[0056] The plurality of ultrasound waves 240 may form a "virtual" gesture surface 230 over the surface 220 wherein a user may perform gestures using, for example, a user extremity. In some embodiments, the ultrasound waves 240 may be at a distance of 5mm or less from the surface.[0057] As described above, ultrasound transmission module 192 is configured to transmit ultrasound waves via the ultrasound transducer arrays 170. The ultrasound transducer arrays 170 may also receive ultrasound echoes 250. The ultrasound transmission module 192 may also be coupled to the one or more ADCs 120, which in turn are coupled to beamformer 180. The one or more ADCs 120 may take a received ultrasound echo 250 and convert an analog signal representation of the received echo 250 to a digital representation. The ADCs may be coupled to beamformer 180 wherein the beamformer 180 may be configured to receive the digital representation of the received ultrasound echo 250 from the one or more ADCs 120. When receiving, information from the different transducers 170 in the array is combined in a way where the expected pattern of ultrasound waves is preferentially observed.[0058] The ultrasound waves may be transmitted using the beamer 181 as described above. The ultrasound transmission module 192 may transmit the waves along a surface (e.g., tabletop) and may contain logic for surface detection. 
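Tying the receive path just described to the time-of-flight relationship discussed above with reference to FIG. 2A, the following sketch combines per-transducer sample streams by delay-and-sum and converts the echo peak to a distance. It is illustrative only: integer sample delays, a nominal speed of sound in air, and toy channel data are assumed, and none of the names come from this disclosure.

    SPEED_OF_SOUND = 343.0  # m/s in air; an assumed nominal value

    def delay_and_sum(channels, sample_delays):
        # Shift each digitized channel by its steering delay (in samples)
        # and sum, so echoes arriving from the steered direction add
        # coherently while off-axis energy tends to cancel.
        length = min(len(ch) - d for ch, d in zip(channels, sample_delays))
        return [sum(ch[i + d] for ch, d in zip(channels, sample_delays))
                for i in range(length)]

    def echo_distance_m(peak_index, sample_rate_hz):
        # The round trip takes peak_index / sample_rate seconds; the
        # one-way distance to the reflecting object is c * t / 2.
        return SPEED_OF_SOUND * (peak_index / sample_rate_hz) / 2.0

    # Toy example: three channels with steering delays of 0, 1, 2 samples
    chans = [[0, 1, 9, 1, 0, 0], [0, 0, 1, 9, 1, 0], [0, 0, 0, 1, 9, 1]]
    beam = delay_and_sum(chans, [0, 1, 2])
    peak = beam.index(max(beam))
    print(f"object at {echo_distance_m(peak, 200_000) * 100:.2f} cm")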
The beamer 181 may also include the capability to modify the ultrasound waves transmitted via the ultrasound transducers 170. For example, if the wavelength or strength of the ultrasound waves needs to be modified, the beamer 181 may include logic to control the behavior of the ultrasound transducers 170.[0059] In some embodiments, the ultrasound waves 240 may be projected along the surface 220 such that the virtual gesture surface 230 is created by a "sweeping scan" of the ultrasound waves 240. That is, each ultrasound transducer 170 may project an ultrasound wave 240 in a one-by-one sequence. In other words, the array of ultrasound transducers 170 is configured with a certain timing to trigger each ultrasound transducer 170 and to project an ultrasound wave (beam) with a controlled direction. As mentioned above, the beamer 181 may control the timing of the ultrasound transducers 170. As such, the ultrasound waves 240 may effectively scan across the surface 220 to detect a gesture input by a user.[0060] FIG. 4 illustrates projection of ultrasound waves 240 along a whiteboard 410, in accordance with some embodiments. As described above, image conversion module 199 is configured to convert a series of gestures into a digital file format. The digital file format may be, for example, Portable Document Format (PDF), JPEG, PNG, etc. The memory 160 within ultrasound beamforming device 100 may be used to store the series of gestures prior to conversion into the digital file format.[0061] The ultrasound beamforming device 100 may project a number of ultrasound waves 240 along the whiteboard 410. In some embodiments, the ultrasound beamforming device 100 may be positioned above the whiteboard 410 such that the ultrasound waves 240 may be projected downward along the surface of the whiteboard 410. However, it can be appreciated that the ultrasound beamforming device 100 may be placed in any position relative to the whiteboard 410.[0062] The ultrasound waves 240 may reflect off of an object along the whiteboard 410 and reflect ultrasound echoes 250 back toward the ultrasound beamforming device 100. In some embodiments, the object may be a user extremity holding a writing instrument. The user may draw characters on the whiteboard 410 with the writing instrument and the ultrasound echoes 250 (that are a reflection off the user extremity or writing instrument) that return to the ultrasound beamforming device 100 may indicate, using the methods described above, hand motions or writing instrument motions performed by the user. When the user lifts the writing instrument off the whiteboard 410, the ultrasound waves 240 will not be blocked by any object, indicating that the user is not in the process of drawing any characters on the whiteboard 410. In some embodiments, the ultrasound beamforming device 100 may store the series of determined user motions into memory 160 local to the ultrasound beamforming device 100. [0063] The stored series of determined user motions may be converted to a digital file format similar to the ones given as examples above. In some embodiments, the series of determined user motions may be converted to a digital file format "on-the-fly" without storing the detected user motions in memory 160.[0064] For example, in FIG. 4, a user may draw the text "The quick brown fox jumps over the lazy dog" on the whiteboard 410 using a pen. The ultrasound beamforming device 100 may scan the surface of the whiteboard 410 with ultrasound waves 240 as described above. 
Any ultrasound waves 240 coming into contact with the user's hand or the pen may reflect an ultrasound echo 250 to the ultrasound beamforming device 100. The ultrasound beamforming device 100 may record the received ultrasound echoes 250 and determine the drawing strokes performed by the user on the whiteboard 410 therefrom. The ultrasound beamforming device 100 may store the determined drawing strokes, which represent "The quick brown fox jumps over the lazy dog," into memory 160. The drawing strokes may then be converted into a digital file format, such as a PDF file.[0065] It can be appreciated that a plurality of writing instruments may also be used by the user to draw on the whiteboard 410. In some embodiments, a user may also use any other object to perform drawing motions on the whiteboard 410 without actually transferring any kind of ink to the whiteboard. For example, a user may use a stylus or other object to outline a drawing on the whiteboard 410. The motion of the user's strokes may be captured by the ultrasound beamforming device 100 and converted to a digital format.[0066] FIG. 5 is an illustrative flow chart 500 depicting an exemplary operation for multi-touch gesture detection using ultrasound beamforming. In block 502, an ultrasound wave is projected parallel to a surface, wherein the ultrasound wave is projected utilizing ultrasound beamforming. In some embodiments, the projecting further includes creating a 2-D gesture scanning area on the surface. The 2-D gesture scanning area may be defined based at least in part on a frequency of the ultrasound wave. In some embodiments, the ultrasound waves are projected at a distance of 5mm or less along the surface.[0067] For example, in FIG. 2A, the ultrasound beamforming device projects a plurality of ultrasound waves parallel to the surface. The projected ultrasound waves create a virtual gesture surface, e.g., a 2-D gesture scanning area, on the surface. The virtual gesture surface may be used by a user to perform gesture input to an external system.[0068] In block 504, an ultrasound echo is received from an object in contact with the surface. In some embodiments, the object may include a user extremity, for example, a hand or an arm. For example, in FIG. 2A, a user's finger on the user's hand is in contact with the virtual gesture surface. The ultrasound waves projected along the surface may come in contact with the user's finger and reflect ultrasound echoes back toward the ultrasound beamforming device.[0069] In block 506, a gesture is interpreted based at least in part on the received ultrasound echo. The gesture may be interpreted to determine a command to relay to an external system. The command may be determined by querying a command database including a mapping of gestures to commands native to the external system. For example, in FIG. 2A, the ultrasound beamforming device may interpret a gesture performed by the user's finger based on the received ultrasound echoes. The ultrasound beamforming device may then query the command database with the interpreted gesture to determine a command associated with the gesture. The command may then be relayed to the external system for execution.[0070] In some embodiments, the method also includes converting the interpreted gesture into a digital image, wherein the digital image is a representation of the interpreted gesture. For example, a user may perform gestures in the manner of drawing on a whiteboard, as in the sketch below. 
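As a concrete illustration of this capture-and-convert flow, the sketch below rasterizes recorded contact points into an image file. It is a minimal sketch under stated assumptions: the contact points are taken as already localized to pixel coordinates, and the plain-text PGM output merely stands in for the PDF, JPEG, or PNG formats named above.

    def rasterize_strokes(points, width=320, height=240):
        # Plot localized contact points into a white-on-black bitmap.
        img = [[0] * width for _ in range(height)]
        for x, y in points:
            if 0 <= x < width and 0 <= y < height:
                img[y][x] = 255
        return img

    def save_pgm(img, path):
        # Write the bitmap as a plain-text PGM file; a fuller
        # implementation would emit PDF, JPEG, or PNG instead.
        with open(path, "w") as f:
            f.write(f"P2 {len(img[0])} {len(img)} 255\n")
            for row in img:
                f.write(" ".join(map(str, row)) + "\n")

    save_pgm(rasterize_strokes([(10, 10), (11, 11), (12, 12)]), "strokes.pgm")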
The ultrasound beamforming device may record the gesture movements based on the received ultrasound echoes and store them into memory. The recorded gesture movements in memory may then be converted into a digital file format, e.g., a PDF file, representing the gesture movements.[0071] FIG. 6 illustrates an example of a computing system in which one or more embodiments may be implemented. A computer system as illustrated in FIG. 6 may be incorporated as part of the above-described computerized device. For example, computer system 600 can represent some of the components of a television, a computing device, a server, a desktop, a workstation, a control or interaction system in an automobile, a tablet, a netbook or any other suitable computing system. A computing device may be any computing device with an image capture device or input sensory unit and a user output device. An image capture device or input sensory unit may be a camera device. A user output device may be a display unit. Examples of a computing device include but are not limited to video game consoles, tablets, smart phones and any other hand-held devices. FIG. 6 provides a schematic illustration of one embodiment of a computer system 600 that can perform the methods provided by various other embodiments, as described herein, and/or can function as the host computer system, a remote kiosk/terminal, a point-of-sale device, a telephonic or navigation or multimedia interface in an automobile, a computing device, a set-top box, a tablet computer and/or a computer system. FIG. 6 is meant only to provide a generalized illustration of various components, any or all of which may be utilized as appropriate. FIG. 6, therefore, broadly illustrates how individual system elements may be implemented in a relatively separated or relatively more integrated manner. In some embodiments, computer system 600 may implement functionality of external system 210 in FIG. 2A.[0072] The computer system 600 is shown comprising hardware elements that can be electrically coupled via a bus 602 (or may otherwise be in communication, as appropriate). The hardware elements may include one or more processors 604, including without limitation one or more general-purpose processors and/or one or more special-purpose processors (such as digital signal processing chips, graphics acceleration processors, and/or the like); one or more input devices 608, which can include without limitation one or more cameras, sensors, a mouse, a keyboard, a microphone configured to detect ultrasound or other sounds, and/or the like; and one or more output devices 610, which can include without limitation a display unit such as the device used in embodiments of the invention, a printer and/or the like.[0073] In some implementations of the embodiments of the invention, various input devices 608 and output devices 610 may be embedded into interfaces such as display devices, tables, floors, walls, and window screens. 
Furthermore, input devices 608 and output devices 610 coupled to the processors may form multi-dimensional tracking systems.[0074] The computer system 600 may further include (and/or be in communication with) one or more non-transitory storage devices 606, which can comprise, without limitation, local and/or network accessible storage, and/or can include, without limitation, a disk drive, a drive array, an optical storage device, a solid-state storage device such as a random access memory ("RAM") and/or a read-only memory ("ROM"), which can be programmable, flash-updateable and/or the like. Such storage devices may be configured to implement any appropriate data storage, including without limitation, various file systems, database structures, and/or the like.[0075] The computer system 600 might also include a communications subsystem 612, which can include without limitation a modem, a network card (wireless or wired), an infrared communication device, a wireless communication device and/or chipset (such as a Bluetooth™ device, an 802.11 device, a Wi-Fi device, a WiMax device, cellular communication facilities, etc.), and/or the like. The communications subsystem 612 may permit data to be exchanged with a network, other computer systems, and/or any other devices described herein. In many embodiments, the computer system 600 will further comprise a non-transitory working memory 618, which can include a RAM or ROM device, as described above.[0076] The computer system 600 also can comprise software elements, shown as being currently located within the working memory 618, including an operating system 614, device drivers, executable libraries, and/or other code, such as one or more application programs 616, which may comprise computer programs provided by various embodiments, and/or may be designed to implement methods, and/or configure systems, provided by other embodiments, as described herein. Merely by way of example, one or more procedures described with respect to the method(s) discussed above might be implemented as code and/or instructions executable by a computer (and/or a processor within a computer); in an aspect, then, such code and/or instructions can be used to configure and/or adapt a general purpose computer (or other device) to perform one or more operations in accordance with the described methods.[0077] A set of these instructions and/or code might be stored on a computer-readable storage medium, such as the storage device(s) 606 described above. In some cases, the storage medium might be incorporated within a computer system, such as computer system 600. In other embodiments, the storage medium might be separate from a computer system (e.g., a removable medium, such as a compact disc), and/or provided in an installation package, such that the storage medium can be used to program, configure and/or adapt a general purpose computer with the instructions/code stored thereon. These instructions might take the form of executable code, which is executable by the computer system 600 and/or might take the form of source and/or installable code, which, upon compilation and/or installation on the computer system 600 (e.g., using any of a variety of generally available compilers, installation programs, compression/decompression utilities, etc.) then takes the form of executable code.[0078] Substantial variations may be made in accordance with specific requirements. 
For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets, etc.), or both. Further, connection to other computing devices such as network input/output devices may be employed. In some embodiments, one or more elements of the computer system 600 may be omitted or may be implemented separately from the illustrated system. For example, the processor 604 and/or other elements may be implemented separately from the input device 608. In one embodiment, the processor is configured to receive images from one or more cameras that are separately implemented. In some embodiments, elements in addition to those illustrated in FIG. 6 may be included in the computer system 600.[0079] Some embodiments may employ a computer system (such as the computer system 600) to perform methods in accordance with the disclosure. For example, some or all of the procedures of the described methods may be performed by the computer system 600 in response to processor 604 executing one or more sequences of one or more instructions (which might be incorporated into the operating system 614 and/or other code, such as an application program 616) contained in the working memory 618. Such instructions may be read into the working memory 618 from another computer-readable medium, such as one or more of the storage device(s) 606. Merely by way of example, execution of the sequences of instructions contained in the working memory 618 might cause the processor(s) 604 to perform one or more procedures of the methods described herein.[0080] The terms "machine-readable medium" and "computer-readable medium," as used herein, refer to any medium that participates in providing data that causes a machine to operate in a specific fashion. In some embodiments implemented using the computer system 600, various computer-readable media might be involved in providing instructions/code to processor(s) 604 for execution and/or might be used to store and/or carry such instructions/code (e.g., as signals). In many implementations, a computer-readable medium is a physical and/or tangible storage medium. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media include, for example, optical and/or magnetic disks, such as the storage device(s) 606. Volatile media include, without limitation, dynamic memory, such as the working memory 618. Transmission media include, without limitation, coaxial cables, copper wire and fiber optics, including the wires that comprise the bus 602, as well as the various components of the communications subsystem 612 (and/or the media by which the communications subsystem 612 provides communication with other devices). 
Hence, transmission media can also take the form of waves (including without limitation radio, acoustic and/or light waves, such as those generated during radio-wave and infrared data communications).[0081] Common forms of physical and/or tangible computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read instructions and/or code.[0082] Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to the processor(s) 604 for execution. Merely by way of example, the instructions may initially be carried on a magnetic disk and/or optical disc of a remote computer. A remote computer might load the instructions into its dynamic memory and send the instructions as signals over a transmission medium to be received and/or executed by the computer system 600. These signals, which might be in the form of electromagnetic signals, acoustic signals, optical signals and/or the like, are all examples of carrier waves on which instructions can be encoded, in accordance with various embodiments of the invention.[0083] The communications subsystem 612 (and/or components thereof) generally will receive the signals, and the bus 602 then might carry the signals (and/or the data, instructions, etc. carried by the signals) to the working memory 618, from which the processor(s) 604 retrieves and executes the instructions. The instructions received by the working memory 618 may optionally be stored on a non-transitory storage device 606 either before or after execution by the processor(s) 604.[0084] The methods, systems, and devices discussed above are examples. Various embodiments may omit, substitute, or add various procedures or components as appropriate. For instance, in alternative configurations, the methods described may be performed in an order different from that described, and/or various stages may be added, omitted, and/or combined. Also, features described with respect to certain embodiments may be combined in various other embodiments. Different aspects and elements of the embodiments may be combined in a similar manner. Also, technology evolves and, thus, many of the elements are examples that do not limit the scope of the disclosure to those specific examples.[0085] Specific details are given in the description to provide a thorough understanding of the embodiments. However, embodiments may be practiced without these specific details. For example, well-known circuits, processes, algorithms, structures, and techniques have been shown without unnecessary detail in order to avoid obscuring the embodiments. This description provides example embodiments only, and is not intended to limit the scope, applicability, or configuration of the invention. Rather, the preceding description of the embodiments will provide those skilled in the art with an enabling description for implementing embodiments of the invention. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the invention.[0086] Also, some embodiments are described as processes depicted as flow diagrams or block diagrams. 
Although each may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may have additional steps not included in the figures. Furthermore, embodiments of the methods may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the associated tasks may be stored in a computer-readable medium such as a storage medium. Processors may perform the associated tasks. Thus, in the description above, functions or methods that are described as being performed by the computer system may be performed by a processor (for example, the processor 604) configured to perform the functions or methods. Further, such functions or methods may be performed by a processor executing instructions stored on one or more computer readable media.[0087] Having described several embodiments, various modifications, alternative constructions, and equivalents may be used without departing from the spirit of the disclosure. For example, the above elements may merely be a component of a larger system, wherein other rules may take precedence over or otherwise modify the application of the invention. Also, a number of steps may be undertaken before, during, or after the above elements are considered. Accordingly, the above description does not limit the scope of the disclosure.[0088] Various examples have been described. These and other examples are within the scope of the following claims. |
PROBLEM TO BE SOLVED: To provide a method and device for disabling one or more cache portions during low voltage operation.SOLUTION: One or more extra bits may be used for a portion of a cache to indicate whether the portion of the cache is operable at or below a Vccmin level. Replacement logic is provided which, on a transition to an ultra-low power mode (ULPM) corresponding to an ultra-low voltage level, flushes the cache lines of all ways that are not operable at the ultra-low voltage level, on the basis of one or more bits corresponding to each of a plurality of cache line groups. Upon detecting an access to a cache line group, the logic permits access to a first way in the ULPM, on the basis of one or more disable bits corresponding to the cache lines of the first way, but permits access to the cache lines of a second way only in a mode other than the ULPM, on the basis of one or more disable bits corresponding to the cache lines of the second way. |
An apparatus comprising: logic to detect an access to a portion of a cache and to determine whether the portion of the cache is operable at an ultra-low voltage level based on one or more bits corresponding to the portion of the cache, wherein the ultra-low voltage level is below a minimum voltage level corresponding to the voltage level at which all memory cells of the cache operate correctly.The apparatus of claim 1, wherein the portion of the cache comprises one or more cache lines or one or more sub-blocks of a plurality of cache lines.The apparatus of claim 1, further comprising test logic to test the portion of the cache to determine whether the portion of the cache is operable at the ultra-low voltage level, wherein the test logic performs the test at manufacture or at power-on self test (POST).The apparatus of claim 3, further comprising: logic to update the one or more bits in response to test results generated by the test logic.The apparatus of claim 1, wherein the one or more bits include one or more redundant bits.The apparatus of claim 1, wherein the access to the portion of the cache results in a miss, depending on the one or more bits, even in the case of a hit with the corresponding tag.The apparatus of claim 1, wherein a given address of the portion of the cache is mapped to a different cache group at different times.The apparatus of claim 7, further comprising a counter that maps the given address to a plurality of different cache groups.The apparatus of claim 1, wherein the cache comprises a level 1 cache, a mid-level cache, or a last-level cache.The apparatus of claim 1, further comprising one or more processor cores, wherein at least one of the one or more processor cores comprises the cache.A method comprising: receiving a request to access a portion of a cache; and 
Determining, based on one or more bits corresponding to the portion of the cache, determining whether the portion of the cache is operational at the ultra-low voltage level.The method further comprises testing the portion of the cache to determine whether the portion of the cache is operable at the ultra low voltage level, the testing comprising manufacturing or power on self The method according to claim 11, performed at test (POST).The method of claim 12, further comprising: updating the one or more bits in response to the testing.The method of claim 11, wherein the portion of the cache comprises one or more cache lines or one or more subblocks of a plurality of cache lines.Receiving the request to enter a power mode corresponding to the ultra-low voltage level and flushing the portion of the cache in response to determining that the portion of the cache can not operate at the ultra-low voltage level The method of claim 11, further comprising the steps of:A memory for storing an instruction, and a processor core for executing the instruction, wherein the processor core detects an access to a part of the cache, and based on one or more bits corresponding to the part of the cache; Logic to determine whether said portion of cache is operable at an ultra low voltage level, said ultra low voltage level being the lowest voltage level corresponding to the voltage level at which all memory cells of said cache operate correctly A computing system that isThe computing system of claim 16, wherein the portion of the cache comprises one or more cache lines or one or more sub-blocks of a plurality of cache lines.The method further comprises test logic for testing the portion of the cache to determine whether the portion of the cache is operable at the ultra-low voltage level, the test logic being either at manufacture or at power on. The computing system according to claim 16, wherein the test is performed at a self test (POST).The computing system of claim 16, wherein the cache comprises a level 1 cache, a mid level cache, or a last level cache.The computing system of claim 16, further comprising an audio device coupled to the processor core. |
Disabling cache portions during low voltage operation. The present disclosure relates generally to electronic devices. More specifically, embodiments of the invention relate to disabling one or more cache portions during low voltage operation. Today's mass-produced silicon can suffer from a large number of parameter variations due to manufacturing. This variation can cause problems in manufacturing various types of memory cells, and it is the cause of a phenomenon known as Vccmin, which governs the lowest voltage at which memory cells operate correctly. Because conventional microprocessors include many structures implemented using various types of memory cells, such structures usually dictate the minimum voltage at which the entire microprocessor operates reliably. Because voltage scaling is effective in reducing the power consumption of microprocessors, certain designs may suffer from Vccmin limits when used at low voltages. The embodiments are described in detail with reference to the accompanying drawings, in which the leading digit of a reference number indicates the number of the drawing in which that reference number first appears, and the use of the same reference numbers in different drawings indicates similar or identical items. FIG. 1 is a block diagram illustrating an embodiment of a computing system utilized to implement the various embodiments described herein. FIGS. 2A and 2B illustrate embodiments of a cache according to some embodiments. FIGS. 3A and 3B illustrate voltage sorting state diagrams for disable bit testing, according to some embodiments. FIG. 4A is a schematic diagram for explaining a read operation in a cache according to an embodiment. FIG. 4B is a block diagram illustrating address remapping logic according to an embodiment. FIG. 5 is a flow chart for explaining a method according to an embodiment of the present invention. FIGS. 6 and 7 are block diagrams illustrating embodiments of computing systems utilized to implement the various embodiments described herein. In the following description, numerous specific details are set forth to provide a thorough description of various embodiments. However, various embodiments of the invention may be practiced without these specific details. In addition, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure particular embodiments of the invention. Furthermore, various aspects of embodiments of the invention may be realized using various means, such as semiconductor integrated circuits (hardware), computer-readable instructions organized into one or more programs (software), or a combination of hardware and software. In this disclosure, the term "logic" means hardware, software, or a combination thereof. Also, although in some embodiments described herein a set value is a logical 0 and a clear value is a logical 1, these may be reversed, for example, depending on the implementation. In some embodiments, one or more cache portions (cache lines or sub-blocks of cache lines) are disabled during low voltage operation.
By addressing the Vccmin problem described above, a memory device can operate at or below the Vccmin level, which reduces power consumption and, for example, extends the battery life of portable computing devices. Also, according to some embodiments, performance loss may be mitigated by maintaining the operation of memory cells in the cache at a finer granularity than whole cache lines during low voltage operation. In addition, one embodiment of the invention maintains the memory cell voltage at a voltage level that ensures the memory cell retains its stored information over a period of time under conditions guaranteed by, for example, Intel(R) reliability criteria. In general, a memory cell is considered to operate reliably at a given voltage level once it passes a series of tests, which may evaluate the read, write, and retention functions of the memory cell. For example, only cells in which no errors are found during testing are considered reliable. According to an embodiment, it may be determined (e.g., from bit values corresponding to one or more cache lines) that one or more cache lines do not function, or do not function reliably, at an ultra-low operating voltage (ULOV), and those cache lines may be disabled when operating at the ULOV on the basis of those bit values. The ULOV may be, for example, about 150 mV lower than the current low voltage level of about 750 mV (which may also be referred to herein as the "minimum voltage level"). According to one embodiment, the processor may transition to an ultra-low power mode (ULPM), i.e., operation at the ULOV, in response to a determination that all cache lines that cannot operate at the ULOV have been flushed (e.g., invalidated and, if necessary, written back to another memory such as main memory). According to one embodiment, the performance loss due to the reduced cache size that results from disabling cache lines may be mitigated, for example, in a high-performance out-of-order processor. A moderate percentage of bad bits may be acceptable given the relatively low cost in performance, the low complexity, and the high predictability of the resulting performance. Such a solution is effective below the Vccmin operating level while having no impact on performance when operating at high Vcc. According to one embodiment, for operation below Vccmin, disabling bad sub-blocks at a fine granularity (e.g., 64 bits) allows cache lines with one or a few bad sub-blocks to remain in use, reducing the performance overhead of methods that disable whole cache lines. In addition, the high performance predictability that is key to chip binning is achieved by rotating the address mapping so that programs whose performance is dominated by a few cache groups receive similar performance hits regardless of the location of bad sub-blocks in the cache. Such techniques are believed to have little or no impact on performance when operating at high Vcc. The techniques described herein may improve the performance of various computing devices, including, for example, those described with reference to FIGS. 1, 6, and 7. More specifically, FIG. 1 is a block diagram illustrating a computing system 100 according to an embodiment of the present invention. The system 100 may include one or more processors 102-1 through 102-N (collectively referred to herein as "processors 102"). The processors 102 may communicate via an interconnect network or bus 104.
Each processor may have various components; for clarity, some components are described only for processor 102-1. The remaining processors 102-2 through 102-N may each have the same or similar components to those described for processor 102-1. According to an embodiment, processor 102-1 may include one or more processor cores 106-1 through 106-M (collectively referred to herein as "cores 106"), a shared cache 108, and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. The chip may further include one or more shared and/or private caches (e.g., cache 108), a bus or interconnect (e.g., bus or interconnect network 112), a memory controller (e.g., such as those described with reference to FIGS. 6 and 7), or other components. According to one embodiment, the router 110 may be utilized to allow processor 102-1 and/or various components of the system 100 to communicate with one another. Processor 102-1 may include multiple routers 110, and the multiple routers 110 may communicate with one another to route data between various components internal or external to processor 102-1. The shared cache 108 may store data (e.g., including instructions) for use by one or more components of processor 102-1, such as the cores 106. For example, the shared cache 108 may cache data locally so that components of the processor 102 can access data stored in the memory 114 more quickly. According to an embodiment, the cache 108 may include a mid-level cache (e.g., a level 2 (L2), level 3 (L3), level 4 (L4), or other level cache), a last-level cache (LLC), and/or a combination of these. Various components of processor 102-1 may communicate with the shared cache 108 directly, via a bus (e.g., bus 112), and/or via a memory controller or memory hub. As shown in FIG. 1, according to some embodiments, one or more of the cores 106 may include a level 1 (L1) cache 116-1 (generically referred to herein as "L1 cache 116") and/or an L2 cache (not shown). FIGS. 2A and 2B illustrate embodiments of a cache according to some embodiments. The cache shown in FIGS. 2A and 2B may be used as the cache described with reference to other figures of the present application, such as FIG. 1, FIG. 6, or FIG. 7. More specifically, in some embodiments, computing devices may utilize a configurable cache, which may trade capacity for the ability to operate at low voltage. According to some embodiments, one or more of the following three features may be utilized. First, a new low power state (referred to herein as ULPM) is introduced that utilizes a voltage level called ULOV. According to one embodiment, ULOV is about 150 mV lower than the current value of Vccmin (assumed to be about 750 mV). Second, a voltage sorting algorithm may be used to determine which cache lines work at ULOV. Third, each cache line group is associated with a disable bit, or d bit. The voltage sorting algorithm sets the d bit associated with each cache line group that is not fully functional at the ultra-low operating voltage. ULPM may be considered an extension of the current power states. For example, when the microprocessor enters the ultra-low power mode, all cache lines that have the d bit set are flushed from the caches affected by the move to the low voltage.
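In software terms, entering ULPM amounts to a sweep over every group and way that writes back and invalidates each line whose d bit is set, before the voltage rail is lowered. The following C sketch is a minimal illustration of that sweep, not the patent's implementation; the structure layout and the names (cache_line_t, prepare_for_ulpm, write_back) are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_GROUPS 1024
#define NUM_WAYS   8

typedef struct {
    bool     valid;
    bool     dirty;
    bool     d_bit;      /* set => line not reliable at ULOV */
    uint64_t tag;
    uint8_t  data[64];
} cache_line_t;

static cache_line_t cache[NUM_GROUPS][NUM_WAYS];

/* Stand-in for a write-back to memory 114 or the next cache level. */
static void write_back(cache_line_t *line) { (void)line; }

/* Flush every line whose d bit is set; afterwards the core may
 * lower the voltage rail to ULOV and enter ULPM. */
void prepare_for_ulpm(void)
{
    for (int g = 0; g < NUM_GROUPS; g++) {
        for (int w = 0; w < NUM_WAYS; w++) {
            cache_line_t *line = &cache[g][w];
            if (line->valid && line->d_bit) {
                if (line->dirty)
                    write_back(line); /* preserve modified data */
                line->valid = false;  /* invalidate unreliable line */
            }
        }
    }
}

int main(void) { prepare_for_ulpm(); return 0; }
```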
Assuming that the LLC, the DCU (L1 data cache), and the IFU (L1 instruction cache) will operate at ULOV after the transition, all DCU and IFU cache lines with the d bit set are flushed (e.g., invalidated and, if necessary, written back to the memory 114). The LLC is likewise prepared for ULOV operation by flushing each cache line for which the d bit is set. Once all cache lines with the d bit set have been excluded from the system, the corresponding processor may transition to ULPM. A cache is usually organized into groups, each group consisting of multiple ways. Each way typically corresponds to one 32- to 64-byte cache line. When the processor presents an address to the cache, a cache lookup is performed. The address can be divided into three parts: line offset, group selection, and tag. Consider a cache design having 1024 groups, each group having 8 ways, and each way holding one 64-byte line. The entire cache has a storage capacity of 512 KB (1024 x 8 x 64). If the cache is designed to handle 50-bit addresses, the address may be divided as follows. Bits 0-5 are the line offset, which specifies a byte within the 64-byte line. According to some embodiments, bits 0-5 may specify a leading byte; one reason for this is that some load/store instructions access multiple bytes, e.g., one byte (or two bytes, etc.) starting from the designated byte may be read. Bits 6-15 are the group selection, which specifies the group that stores the line. The remaining bits (16-49) are stored as the tag. All cache lines with the same group selection bits map to one of the eight ways of the identified group.
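For the concrete 512 KB design just described, the three address fields fall out of simple shifts and masks. This is a minimal C sketch of that split (bits 0-5 offset, bits 6-15 group, bits 16-49 tag); the function names are illustrative, not taken from the patent.

```c
#include <stdint.h>
#include <stdio.h>

/* 512 KB cache: 1024 groups x 8 ways x 64-byte lines, 50-bit addresses. */
#define OFFSET_BITS 6   /* bits 0-5:  byte within the 64-byte line */
#define GROUP_BITS  10  /* bits 6-15: one of 1024 groups */

static uint64_t line_offset(uint64_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);
}

static uint64_t group_select(uint64_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << GROUP_BITS) - 1);
}

static uint64_t tag_of(uint64_t addr) {
    return addr >> (OFFSET_BITS + GROUP_BITS); /* bits 16-49 */
}

int main(void) {
    uint64_t addr = 0x2ABCD1234ULL;
    printf("offset=%llu group=%llu tag=0x%llx\n",
           (unsigned long long)line_offset(addr),
           (unsigned long long)group_select(addr),
           (unsigned long long)tag_of(addr));
    return 0;
}
```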
According to an embodiment, each cache line group may be associated with a d bit that specifies whether the cache line group functions at low voltage. As shown in FIGS. 2A and 2B, the d bit is read by the replacement logic 202 but has no effect except when the processor is in ULPM or transitioning to ULPM. Thus, the logic 202 may detect accesses to one or more cache portions (e.g., cache lines) and determine whether the cache portion is operable below Vccmin. When transitioning to ULPM, all cache lines for which the d bit is set are flushed, to avoid losing data after the move to ULPM. In ULPM, the cache functions normally except that only cache lines whose d bit is set to 0 are considered valid; when a group is searched based on an address in ULPM, the d bit prevents a false match against a disabled line. In the embodiments described herein, the set value is 0 and the clear value is 1, but in some embodiments this can be reversed; for example, a cleared d bit may indicate the disabling of one or more corresponding cache lines. Also, when a cache miss occurs, the replacement logic 202 selects a cache line to evict from the cache, and the selected cache line is overwritten with new data fetched from memory. In ULPM, the d bits are consulted by the replacement logic 202 (FIG. 2B) to prevent allocation to disabled cache lines. A disabled cache line may be prevented from being allocated by being treated by the replacement process as MRU (Most Recently Used, i.e., the shortest time since last use). For example, a bit-vector replacement process may be used together with per-line disabling: the bit vector (one bit per cache line) is scanned to identify the first line marked 0, which is replaced as the LRU (Least Recently Used, i.e., the longest time since last use) line. A disabled cache line always has its associated bit set to 1, so it is treated as MRU and is never selected as a replacement target. Regarding defects in the d bit itself: in ULPM, where the d bits affect the function of the cache, a defective d bit can manifest in one of two ways. A d bit value of 0 denotes a cache line that operates at low voltage; conversely, a d bit value of 1 denotes a cache line that does not function at low voltage. The first case is a d bit stuck at 1, which simply disables the corresponding cache line; a line in which every bit is functional except its d bit is disabled, and the cache still works correctly. The second case is a d bit stuck at 0. Here the damaged d bit erroneously indicates that the cache line is functional, which causes a problem when the cache line is in fact defective. To function properly, embodiments of the invention must ensure that the d bit is not accidentally stuck at 0. One way to solve this problem is to change the cell design to eliminate the possibility of this failure mode in the d bit. A second way is to add one or more redundant d bits. For example, three d bits may be used, all written with the same value (all ones or all zeros). If the d bits are read out and any one of them is set to 1, the line may be treated as disabled; only d bits that read back correctly as three 0's mark a cache line as usable at the ultra-low operating voltage. In this case a d bit error occurs only when all three bits are damaged, so the probability of a d bit error is very low.
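The read-out rule for the triplicated d bit is simply an OR of the three copies: any copy reading 1 disables the line, so a single stuck-at-1 cell merely wastes a good line, while a false "enabled" requires all three copies to fail. A minimal C sketch under the convention used above (1 = disabled); the type and function names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Three copies of the d bit, always written with the same value. */
typedef struct { uint8_t d[3]; } dbits_t;

static void dbits_write(dbits_t *b, bool disabled) {
    b->d[0] = b->d[1] = b->d[2] = disabled ? 1 : 0;
}

/* Conservative read: only a clean 0/0/0 read-out marks the line as
 * usable at ULOV; any copy reading 1 disables the line. */
static bool dbits_disabled(const dbits_t *b) {
    return (b->d[0] | b->d[1] | b->d[2]) != 0;
}

int main(void) {
    dbits_t b;
    dbits_write(&b, false);   /* line tested good at ULOV */
    b.d[1] = 1;               /* one copy damaged (stuck at 1) */
    printf("disabled=%d\n", dbits_disabled(&b)); /* 1: fails safe */
    return 0;
}
```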
FIGS. 3A and 3B are voltage sorting state diagrams for d bit tests performed at manufacture and at POST (power-on self-test), respectively, according to some embodiments. More specifically, voltage sorting may be performed in one of two ways. First, voltage sorting may be performed when the processor is manufactured, as shown in FIG. 3A. For the d bits to remain valid after the power is turned off and on again, they are stored in fuses or in another type of non-volatile memory, such as a BIOS (Basic Input Output System) memory or package flash. The other way is to store the d bits in additional bits included in the tag or status bits (for example, the MESI (modified, exclusive, shared, invalid) bits) associated with each cache line. When the d bits are stored in this manner, voltage sorting must be executed anew to regenerate the d bits each time the power is cycled, and this method requires the processor to be able to perform memory tests on the memory structure in situ at low voltage. One possible way to implement this configuration is to set the appropriate d bits during POST, as shown in FIG. 3B. More specifically, FIG. 3B illustrates how a processor with four different states, HFM (high frequency mode), LFM (low frequency mode), ULPM, and off, changes from one state to another when the d bits are set by POST and regenerated each time the power is turned off and on again. POST is performed each time the off state transitions to one of the three on states. As described with reference to FIGS. 2A-3B, the cache can be configured to have different capacities depending on performance levels, and different Vccmins depending on power budgets. In addition, according to some embodiments, it may be possible to design a component taking into account the different power requirements of different markets. As a result, costs can be reduced, since fewer distinct products need to be designed to cover a wide variety of markets. According to an embodiment, a bad cache entry is not discarded entirely; rather, its non-defective bits are still utilized. Even if lowering Vcc to operate the cache below Vccmin makes the percentage of defective bits moderate, this is acceptable. This method has the further benefit of improving performance predictability, helping ensure that two processors achieve the same performance for any given program; variation in performance arises because the defect locations differ from chip to chip and therefore affect performance differently. FIG. 4A is a schematic diagram explaining the read operation in a cache according to an embodiment. The cache shown is a two-way set associative cache, with each cache line comprising four sub-blocks. According to an embodiment, each cache line is extended with several bits stored with the cache tag (e.g., as shown in FIG. 4A, bits 1011 are stored with tag 1, and bits 0111 are stored with tag 2). Each cache line is logically divided into a plurality of sub-blocks. The size of a sub-block may be the same as the smallest portion of the line protected by parity or an ECC (error correction code). For example, if the contents are protected by ECC at 64-bit granularity and the cache line has eight sub-blocks, eight additional bits indicate whether each sub-block may be used. All of the additional bits are set except those whose corresponding sub-blocks contain more bad bits than allowed. For example, for a SECDED (single error correction, double error detection) protected block, if two bad bits are present, the corresponding bit should be cleared. The cache shown in FIG. 4A operates as follows. When an access is performed, the tags 402 and 403 are read and, if necessary, data from all lines of the group 404 is retrieved. The address offset indicates which sub-block is required. The offset 406 is used to select the bit corresponding to the required sub-block for each cache line of the group. The cache tags are compared (e.g., by comparators 408 and 410) with the requested address. A tag hit 411 (the output from OR gate 412 based on the outputs of AND gates 414 and 422) may be obtained while the additional bit corresponding to the required sub-block indicates that the sub-block is bad. In such a case, a false hit 418 (e.g., the output from OR gate 420 based on the outputs of AND gates 416 and 424) is obtained. Possible responses in this case include the following. (i) Report a miss, because the data is not available. (ii) Evict the cache line; for a write-back cache, the damaged data is propagated to the upper cache level, and only the valid sub-blocks need to be updated. A write-through cache evicts the cache line to prepare for a load, and updates the upper cache level to prepare for a store. (iii) Mark the cache line as the MRU line in the group, so that when the data is requested from the upper cache level it is very likely to be placed in another cache line of the group that can hold the necessary data. In the unlikely case that the selected cache line also has a bad sub-block at the same position, the process is repeated until a cache line with no bad sub-block at the requested position is identified, as long as the group contains at least one such line. An error results only when all sub-blocks at the same position of a given group's cache lines are unusable, which occurs only if the percentage of bad bits is higher than the tolerance level (e.g., determined based on a threshold defined for a given design).
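The gate network of FIG. 4A reduces, in software terms, to: a way hits only if its tag matches and the per-sub-block bit selected by the offset is set; a tag match with a cleared sub-block bit is a false hit that must be handled rather than served. The following C sketch is a simplified model of that logic, not the hardware itself; the names and encodings are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define WAYS      2  /* two-way set associative, as in FIG. 4A */
#define SUBBLOCKS 4  /* four sub-blocks per line */

typedef struct {
    bool     valid;
    uint64_t tag;
    uint8_t  sub_ok;  /* one bit per sub-block: 1 = usable, e.g. 0b1011 */
} way_t;

typedef enum { MISS = 0, TRUE_HIT = 1, FALSE_HIT = 2 } lookup_t;

/* Look up one group: a tag match whose requested sub-block is marked
 * bad yields FALSE_HIT (to be reported as a miss, evicted, etc.). */
static lookup_t lookup(const way_t group[WAYS], uint64_t tag, unsigned sub) {
    for (int w = 0; w < WAYS; w++) {
        if (group[w].valid && group[w].tag == tag)
            return (group[w].sub_ok & (1u << sub)) ? TRUE_HIT : FALSE_HIT;
    }
    return MISS;
}

int main(void) {
    way_t group[WAYS] = {
        { true, 0x1, 0x0B },  /* sub-blocks 0, 1, 3 usable (1011) */
        { true, 0x2, 0x07 },  /* sub-blocks 0, 1, 2 usable (0111) */
    };
    printf("%d\n", lookup(group, 0x1, 2)); /* 2: false hit, bad sub-block */
    printf("%d\n", lookup(group, 0x2, 2)); /* 1: true hit */
    return 0;
}
```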
Thus, according to one embodiment, an access to the cache may be treated as a miss even if the tag hits, because the additional bits covering a portion of the cache line indicate a failure. As noted above, any cache line can also be disabled entirely using the d bit; such a mechanism may be used to avoid cache lines with bad dirty bits, bad valid bits, or bad tags. According to an embodiment, if an additional (sub-block) bit is itself a bad bit, the cache line is likewise marked as a bad cache line. The additional mechanisms shown in FIG. 4A (e.g., the additional bits and comparison logic, and the associated AND and OR gates) may be disabled when operating at high Vcc, for example by setting all of the additional bits to '1' or by simply ignoring the additional bits. FIG. 4B is a block diagram illustrating address remapping logic according to an embodiment. Dynamic address remapping may be utilized (e.g., in a round-robin fashion) so that a given address is mapped to different cache groups during different time intervals, in order to even out performance variation. With such a configuration, for a given program and percentage of defective bits, there is almost no variation in performance among processors regardless of where the defective bits are located. As shown in FIG. 4B, an N-bit counter 452 may be used, where N may be any value from 1 up to the number of bits needed to specify a cache group. For example, for a 32 KB cache having eight ways with 64 bytes per line, the number of groups is 64 and the index occupies six bits, so a counter of 6 bits or fewer is sufficient. In the particular embodiment illustrated, a 4-bit counter 452 is utilized. The counter is updated periodically or occasionally (for example, every 10 million cycles). The N bits of the counter are XORed bit by bit (by XOR gate 454) with N of the bits that index the group. Thus, according to an embodiment, a given address may be mapped to different cache groups at different times. Address remapping may be performed either at cache access or at address calculation. The latency impact should be small, since only one XOR gate level is added and half of the inputs (the counter outputs) are preset. According to an embodiment, the contents of the cache are flushed whenever the counter is updated, to avoid mismatches; because the counter is updated rarely, the impact on performance is negligible. The mechanism shown in FIG. 4B may also be disabled when operating at high Vcc simply by stopping the counter updates.
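The remapping of FIG. 4B comes down to XOR-ing the low counter bits into the group index and flushing the cache whenever the counter changes. A minimal C sketch for the 64-group example above; the names and the update hook are assumptions, and a real design would trigger the update from a cycle counter rather than an explicit call.

```c
#include <stdint.h>
#include <stdio.h>

/* 32 KB cache, 8 ways, 64-byte lines => 64 groups, 6-bit group index. */
#define OFFSET_BITS 6
#define GROUP_BITS  6
#define N           4  /* width of the remapping counter, N <= GROUP_BITS */

static uint32_t counter; /* bumped periodically, e.g. every 10M cycles */

/* XOR the low N counter bits into the group index so that a given
 * address maps to different groups during different intervals. */
static uint32_t remapped_group(uint64_t addr) {
    uint32_t grp = (uint32_t)(addr >> OFFSET_BITS) & ((1u << GROUP_BITS) - 1);
    return grp ^ (counter & ((1u << N) - 1));
}

static void on_counter_update(void) {
    counter++;
    /* The cache must be flushed here: lines indexed under the old
     * mapping would otherwise be looked up in the wrong group. */
}

int main(void) {
    uint64_t addr = 0x1F40;
    printf("group before: %u\n", remapped_group(addr));
    on_counter_update();
    printf("group after:  %u\n", remapped_group(addr));
    return 0;
}
```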
FIG. 5 is a flow chart illustrating a method 500 for disabling a portion of the cache during low voltage operation according to an embodiment of the present invention. According to some embodiments, one or more of the operations described with reference to FIG. 5 may be performed using the various components described with reference to FIGS. 1-4B and 6-7. Referring to FIGS. 1-5, at operation 502 it is determined (e.g., by the logic 202 or the logic shown in FIG. 4A) whether an access request for a portion of the cache has been received or detected. Once an access is received, it is determined at operation 504 whether the portion of the cache is operable below Vccmin, as described herein, for example, with reference to FIGS. 1-4B. If the determination at operation 504 is negative, a miss is returned (e.g., as described with reference to FIGS. 1-4B). If the determination at operation 504 is affirmative, a hit is returned at operation 508 (e.g., as described with reference to FIGS. 1-4B). FIG. 6 is a block diagram illustrating a computing system 600 according to an embodiment of the present invention. The computing system 600 may comprise one or more central processing units (CPUs) 602 or processors that communicate via an interconnect network (or bus) 604. The processors 602 may include a general purpose processor, a network processor (for processing data exchanged via a computer network 603), or another type of processor (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). The processors 602 may have a single-core or multi-core configuration. A processor 602 in a multi-core configuration may integrate multiple types of processor cores on the same integrated circuit (IC) die, and may be implemented as a symmetric or asymmetric multiprocessor. According to an embodiment, one or more of the processors 602 may be the same as or similar to the processor 102 shown in FIG. 1; for example, one or more of the processors 602 may include one or more of the caches described with reference to FIGS. 1-5. Also, the operations described with reference to FIGS. 1-5 may be performed by one or more components of the system 600. A chipset 606 may also communicate with the interconnect network 604. The chipset 606 may include a memory control hub (MCH) 608, which may include a memory controller 610 in communication with a memory 612. The memory 612 may be the same as or similar to the memory 114 shown in FIG. 1, and may store data, including sequences of instructions executed by the CPU 602 or any other device included in the computing system 600. According to one embodiment of the invention, the memory 612 may include one or more volatile storage (or memory) devices, such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices. Alternatively, non-volatile memory, such as a hard disk, may be used. Additional devices, such as multiple CPUs and/or multiple system memories, may also communicate via the interconnect network 604. The MCH 608 may further include a graphics interface 614 in communication with a display device 616. According to one embodiment of the invention, the graphics interface 614 may communicate with the display device 616 via an accelerated graphics port (AGP). In an embodiment of the invention, the display 616 (such as a flat panel display) may communicate with the graphics interface 614 via, for example, a signal converter that translates digital images stored in a storage device, such as video memory or system memory, into display signals that are interpreted and displayed by the display 616.
Display signals generated by the display device may pass through various control devices before being interpreted by and displayed on the display 616. A hub interface 618 may allow the MCH 608 and an input/output control hub (ICH) 620 to communicate with one another. The ICH 620 may provide an interface to I/O devices in communication with the computing system 600. The ICH 620 may communicate with a bus 622 via a peripheral bridge (or controller) 624, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or another type of peripheral bridge or controller. The bridge 624 may provide a data path between the CPU 602 and peripheral devices. Other types of topologies may be used, and multiple buses may communicate with the ICH 620, for example, through multiple bridges or controllers. Furthermore, other peripherals in communication with the ICH 620 may include, in various embodiments of the invention, IDE (Integrated Drive Electronics) or SCSI (Small Computer System Interface) hard drives, USB ports, a keyboard, a mouse, parallel ports, serial ports, floppy disk drives, digital output support (e.g., a digital video interface (DVI)), or other devices. The bus 622 may communicate with an audio device 626, one or more disk drives 628, and a network interface device 630 (which communicates with the computer network 603). According to some embodiments of the invention, various components (e.g., the network interface device 630) may instead communicate with the MCH 608. The processor 602 and other components shown in FIG. 6 (including, but not limited to, the MCH 608 and one or more components of the MCH 608) may also be combined to form a single chip. According to another embodiment of the invention, the MCH 608 may include a graphics accelerator. The computing system 600 may also include volatile and/or non-volatile memory (or storage). Examples of non-volatile memory include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 628), a floppy disk, a compact disc ROM (CD-ROM), a digital versatile disc (DVD), flash memory, a magneto-optical disc, or other types of non-volatile machine-readable media capable of storing electronic data (e.g., including instructions). FIG. 7 is a diagram illustrating a computing system 700 arranged in a point-to-point (PtP) configuration according to an embodiment of the present invention. In particular, FIG. 7 shows a system in which processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations described with reference to FIGS. 1-6 may be performed by one or more components of the system 700. As shown in FIG. 7, the system 700 may include several processors, of which only two, processors 702 and 704, are shown for clarity. The processors 702 and 704 may include local memory controller hubs (MCHs) 706 and 708, respectively, to communicate with memories 710 and 712. The memories 710 and/or 712 may store various data, such as those described with reference to the memory 612 of FIG. 6. According to an embodiment, the processors 702 and 704 may each be one of the processors 602 described with reference to FIG. 6 and may include, for example, one or more of the caches described with reference to FIGS. 1-5.
The processors 702 and 704 may exchange data via a PtP interface 714 using point-to-point (PtP) interface circuits 716 and 718, respectively. The processors 702 and 704 may each exchange data with a chipset 720 via corresponding PtP interfaces 722 and 724 using point-to-point interface circuits 726, 728, 730, and 732. The chipset 720 may further exchange data with a graphics circuit 734 via a graphics interface 736, for example using a PtP interface circuit 737. At least one embodiment of the invention may be provided within the processors 702 and 704. For example, one or more of the cores 106 shown in FIG. 1 may be located within the processors 702 and 704. Other embodiments of the invention, however, may be provided in other circuits, logic units, or devices included in the system 700 of FIG. 7, and embodiments of the invention may also be distributed across several of the circuits, logic units, or devices illustrated in FIG. 7. The chipset 720 may communicate with a bus 740 using a PtP interface circuit 741. The bus 740 may communicate with one or more devices, such as a bus bridge 742 and I/O devices 743. Via a bus 744, the bus bridge 742 may communicate with other devices such as a keyboard/mouse 745, communication devices 746 (e.g., a modem, a network interface device, or another communication device that communicates with the computer network 603), an audio I/O device 747, and/or a data storage device 748. The data storage device 748 may store code 749 that may be executed by the processors 702 and/or 704. In various embodiments of the invention, the processes described herein, for example with reference to FIGS. 1-7, may be implemented as hardware (e.g., logic circuitry), software, firmware, or combinations thereof, which may be provided as a computer program product, e.g., a machine-readable or computer-readable medium having stored thereon instructions (or software procedures) used to program a computer to perform the processes described herein. The machine-readable medium may include storage devices such as those described herein. Such tangible computer-readable media may also be downloaded as a computer program product, in which case the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals traveling through a propagation medium via a communication link (e.g., a bus, a modem, or a network connection). Reference in the specification to "one embodiment," "an embodiment," or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment; the phrase "one embodiment" appears repeatedly in the description, but its appearances do not all necessarily refer to the same embodiment. Also, in the description and claims, the terms "coupled" and "connected" may be used. According to some embodiments of the invention, "connected" may mean that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact; however, "coupled" may also mean that two or more elements are not in direct contact with each other but still cooperate or interact with each other. Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological steps, it is to be understood that the claimed subject matter is not limited to the specific features or steps described.
Rather, the specific features and steps are disclosed as exemplary forms of implementing the claimed subject matter. |
Embodiments provide a method comprising providing a multi-memory die that comprises multiple individual memory dies. Each of the individual memory dies is defined as an individual memory die within a wafer of semiconductor material during production of memory dies. The multi-memory die is created by singulating the wafer of semiconductor material into memory dies where at least one of the memory dies is a multi-memory die that includes multiple individual memory dies that are still physically connected together. The method further comprises coupling a semiconductor die to the multi-memory die. |
Claims What is claimed is: 1. A method comprising: providing a multi-memory die that comprises multiple individual memory dies, wherein each of the individual memory dies is defined as an individual memory die within a wafer of semiconductor material during production of memory dies, and the multi-memory die is created by singulating the wafer of semiconductor material into memory dies where at least one of the memory dies is a multi-memory die that includes multiple individual memory dies that are still physically connected together; and coupling a semiconductor die to the multi-memory die. 2. The method of claim 1, wherein the semiconductor die comprises a System on a Chip. 3. The method of claim 1, wherein: the semiconductor die is coupled to the multi-memory die via an adhesive such that the semiconductor die is located between input/output pads located on two of the individual memory dies; and the method further comprises coupling the semiconductor die to the multi-memory die via a wire bonding process that couples the semiconductor die to the input/output pads. 4. The method of claim 3, further comprising coupling the multi-memory die to a substrate via one of a flip chip process or a wire bonding process. 5. The method of claim 1, wherein the semiconductor die is coupled to the multi-memory die via a flip chip process. 6. The method of claim 5, further comprising coupling the multi-memory die to a substrate via one of a flip chip process or a wire bonding process. 7. The method of claim 6, wherein: the multi-memory die is coupled to the substrate via a flip chip process; and the method further comprises coupling the semiconductor die to the substrate via through silicon vias defined within the multi-memory die. 8. The method of claim 7, further comprising providing a passivation material between the multi-memory die and the substrate. 9. The method of claim 8, further comprising providing a molded body over at least portions of (i) a top surface of the semiconductor die, (ii) a top surface of the multi-memory die, and (iii) a top surface of the substrate. 10. The method of claim 1, wherein the semiconductor die comprises another multi-memory die. 11. An apparatus comprising: a multi-memory die that comprises multiple individual memory dies, wherein each of the individual memory dies is defined as an individual memory die within a wafer of semiconductor material during production of memory dies, and the multi-memory die is created by singulating the wafer of semiconductor material into memory dies where at least one of the memory dies is a multi-memory die that includes multiple individual memory dies that are still physically connected together; and a semiconductor die coupled to the multi-memory die. 12. The apparatus of claim 11, wherein the semiconductor die comprises a system on a chip. 13. The apparatus of claim 11, wherein: the semiconductor die is coupled to the multi-memory die via an adhesive such that the semiconductor die is located between input/output pads located on two of the individual memory dies; and the semiconductor die is coupled to the multi-memory die via a wire bonding process that couples the semiconductor die to the input/output pads. 14. The apparatus of claim 13, further comprising a substrate coupled to the multi-memory die via one of a flip chip process or a wire bonding process. 15. The apparatus of claim 11, wherein the semiconductor die is coupled to the multi-memory die via a flip chip process. 16. The apparatus of claim 15, further comprising a substrate coupled to the multi-memory die via one of a flip chip process or a wire bonding process. 17. The apparatus of claim 16, wherein: the multi-memory die is coupled to the substrate via a flip chip process; and the semiconductor die is coupled to the substrate via through silicon vias defined within the multi-memory die. 18. The apparatus of claim 17, further comprising a passivation material between the multi-memory die and the substrate. 19. The apparatus of claim 18, further comprising a passivation layer on at least portions of (i) a top surface of the semiconductor die, (ii) a top surface of the multi-memory die, and (iii) a top surface of the substrate. 20. The apparatus of claim 11, wherein the semiconductor die comprises another multi-memory die. |
METHODS AND ARRANGEMENTS RELATING TO SEMICONDUCTOR PACKAGES INCLUDING MULTI-MEMORY DIES Cross-References to Related Applications This disclosure claims priority to U.S. Patent Application No. 13/532,444, filed June 25, 2012, which claims priority to U.S. Provisional Patent Application No. 61/501,672, filed June 27, 2011, and U.S. Provisional Patent Application No. 61/642,364, filed May 3, 2012, the entire disclosures of which are hereby incorporated by reference in their entireties except for those sections, if any, that are inconsistent with this disclosure. Technical Field Embodiments of the present disclosure relate to the field of integrated circuits, and more particularly, to techniques, structures, and configurations for semiconductor chip packaging. Background The background description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventor(s), to the extent the work is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure. The size of semiconductor dies continues to decrease. More particularly, semiconductor dies that include System On Chip (SOC) configurations and semiconductor dies that are configured as memory dies such as, for example, dynamic random access memory (DRAM) dies are becoming increasingly smaller. Such decreasing sizes of the semiconductor dies can lead to problems in stacking the dies within various semiconductor packaging arrangements. Summary In various embodiments, there is provided a method that comprises providing a multi-memory die that comprises multiple individual memory dies. Each of the individual memory dies is defined as an individual memory die within a wafer of semiconductor material during production of memory dies. The multi-memory die is created by singulating the wafer of semiconductor material into memory dies where at least one of the memory dies is a multi-memory die that includes multiple individual memory dies that are still physically connected together. The method further comprises coupling a semiconductor die to the multi-memory die. The present disclosure also provides an apparatus that comprises a multi-memory die that comprises multiple individual memory dies. Each of the individual memory dies is defined as an individual memory die within a wafer of semiconductor material during production of memory dies. The multi-memory die is created by singulating the wafer of semiconductor material into memory dies where at least one of the memory dies is a multi-memory die that includes multiple individual memory dies that are still physically connected together. The apparatus further comprises a semiconductor die coupled to the multi-memory die. Brief Description of the Drawings Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. Embodiments are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings. Figs. 1A and 1B schematically illustrate an example of a wafer of semiconductor material configured with memory dies. Figs. 2A and 2B schematically illustrate a semiconductor die coupled to a multi-memory die comprising multiple individual dynamic random access memory (DRAM) dies. Figs.
2C and 2D schematically illustrate top views of examples of a semiconductor die coupled to a multi-memory die comprising multiple individual memory dies. Figs. 3A and 3B schematically illustrate examples of packaging arrangements for a semiconductor die coupled to a multi-memory die comprising multiple individual memory dies. Figs. 4A and 4B schematically illustrate further examples of packaging arrangements for a semiconductor die coupled to a multi-memory die comprising multiple individual memory dies. Figs. 5A-5C schematically illustrate further examples of packaging arrangements for a semiconductor die coupled to a multi-memory die comprising multiple individual memory dies. Figs. 6A and 6B schematically illustrate further examples of packaging arrangements for a semiconductor die coupled to a multi-memory die comprising multiple individual memory dies. Fig. 7 schematically illustrates an example of a packaging arrangement for a multi-memory die comprising multiple individual memory dies. Fig. 8A schematically illustrates a top view of an example of a packaging arrangement including two multi-memory dies each comprising two individual memory dies. Fig. 8B schematically illustrates a perspective view of the packaging arrangement illustrated in Fig. 8A. Fig. 9 schematically illustrates a top view of each of the first and second multi-memory dies of the packaging arrangement of Figs. 8A and 8B. Fig. 10 schematically illustrates a top view of a semiconductor die coupled to two multi-memory dies each comprising multiple individual memory dies. Fig. 11 schematically illustrates a side view of the packaging arrangement of Figs. 8A and 8B. Fig. 12 schematically illustrates a top view of a semiconductor package that includes the packaging arrangement of Figs. 8A and 8B. Fig. 13 schematically illustrates a side view of the semiconductor arrangement of Fig. 12. Fig. 14 illustrates an example of a method for creating a packaging arrangement that comprises a semiconductor die coupled to a multi-memory die comprising multiple individual memory dies. Detailed Description Fig. 1A schematically illustrates an example of a wafer 100 of semiconductor material that has been configured into a plurality of semiconductor dies 102 for production of semiconductor dies. The wafer 100 is singulated or divided, for example by cutting with a laser, into individual semiconductor dies 102 in order to provide a plurality of individual semiconductor dies 102 that have been physically separated from each other. In accordance with an embodiment, the plurality of individual semiconductor dies 102 are configured as individual memory dies 102, and more particularly, as individual dynamic random access memory (DRAM) dies 102. However, other types of memory dies are possible, as well as other types of semiconductor dies, and thus, the example of DRAM memory dies 102 is not meant to be limiting. Fig. 1B schematically illustrates the wafer 100 of semiconductor material configured into a plurality of multi-memory dies 104. As can be seen, each multi-memory die 104 includes multiple individual memory dies 102 that are still physically connected, i.e., they have not been singulated or separated from each other. Examples of multi-memory dies 104 can include arrangements of individual memory dies 102 of one by two, two by two, two by three, etc. These examples are not meant to be limiting. Referring to Figs.
2A and 2B, an example of a packaging arrangement 200 is illustrated wherein a semiconductor die 202, configured as a System on a Chip (SOC) semiconductor die, is coupled to a multi-memory die 204. In this example, the multi-memory die 204 includes two individual DRAM dies 102 that are still physically connected, i.e., a one by two configuration. The SOC semiconductor die 202 is disposed on top of the multi-memory die 204 at or near a border 206 between the two individual DRAM dies 102. As can be seen in Figs. 2A and 2B, in general, DRAMs include input/output (I/O) pads or pins 208 in the center of an individual DRAM die 102. Thus, as can be seen in Figs. 2A and 2B, the SOC semiconductor die 202 is located at or near the border 206 and between the I/O pads 208. Fig. 2A illustrates the SOC semiconductor die 202 coupled to the multi-memory die 204 via adhesive 212 and then coupled to the I/O pads 208 via a wire bonding process using wires 210. Fig. 2B illustrates the SOC semiconductor die 202 coupled to the multi-memory die 204 via a flip chip process and thus, in such an arrangement, the SOC semiconductor die 202 is not coupled to the I/O pads 208 of the multi-memory die 204 via wire bonding but is coupled to the multi-memory die 204 via solder balls 214 and bond pads (not illustrated). Figs. 2C and 2D are top views illustrating examples of packaging arrangement 200 where the SOC semiconductor die 202 is disposed on a multi-memory die 204 that includes an arrangement of individual memory dies other than a one by two arrangement of individual memory dies 102. For example, Fig. 2C illustrates the SOC semiconductor die 202 disposed on a multi-memory die 204 that includes four individual memory dies 102a-d arranged in a two by two configuration. The SOC semiconductor die 202 can be connected to the multi-memory die 204 via suitable connections such as, for example, solder balls and bond pads (not illustrated), a wire bonding process, etc. Fig. 2D illustrates another example of packaging arrangement 200 where the SOC semiconductor die 202 is disposed on a multi-memory die 204 that includes six individual memory dies 102a-f arranged in a two by three configuration. The semiconductor die 202 can be connected to the multi-memory die 204 via suitable connections such as, for example, solder balls and bond pads (not illustrated), a wire bonding process, etc. Fig. 3A illustrates an embodiment of a packaging arrangement 300a in accordance with various aspects of the present disclosure. In the packaging arrangement 300a, a semiconductor die 302 is coupled to a multi-memory die 304 that includes two individual memory dies 102 that are still physically connected. More individual memory dies 102 can be included if desired. In an embodiment, the individual memory dies 102 are DRAM dies. In accordance with various embodiments, the semiconductor die 302 is configured as a System on a Chip (SOC) semiconductor die 302. In the packaging arrangement 300a illustrated in Fig. 3A, the SOC semiconductor die 302 is coupled to a bottom surface 306 of the multi-memory die 304 via a flip chip process and thus, the multi-memory die 304 is physically coupled to the semiconductor die 302 via solder balls 308a. The solder balls 308a are coupled to bond pads (not illustrated) located on the multi-memory die 304 and bond pads (not illustrated) located on the SOC semiconductor die 302. Underfill material 312a is provided between the SOC semiconductor die 302 and the multi-memory die 304.
A redistribution layer or fan-out layer (not illustrated) in the multi-memory die 304 is utilized to move or "fan out" the signals to and from the SOC semiconductor die 302 to bond pads 314a on the multi-memory die 304. Solder balls 316a are utilized at the bond pads 314a to couple the packaging arrangement 300a to another package or to a printed circuit board (PCB) (not illustrated) and thereby transmit signals between the packaging arrangement 300a and another package (not illustrated). Thus, the multi-memory die 304 serves as a substrate within the packaging arrangement 300a. While Fig. 3A illustrates the SOC semiconductor die 302 coupled to the multi-memory die 304 along the bottom surface 306 of the multi-memory die 304, the SOC semiconductor die 302 may be coupled to a top surface 318 of the multi-memory die 304 if desired. Fig. 3B illustrates an example of a packaging arrangement 300b similar to that illustrated in Fig. 3A, wherein the SOC semiconductor die 302 is coupled to the top surface 318 of the multi-memory die 304. The SOC semiconductor die 302 is flip chip attached to the top surface 318 of the multi-memory die 304 via solder balls 308b. The solder balls 308b are coupled to bond pads (not illustrated) on the top surface 318 of the multi-memory die 304 and bond pads (not illustrated) located on the SOC semiconductor die 302. Underfill material 312b is provided between the multi-memory die 304 and the SOC semiconductor die 302. In the embodiment illustrated in Fig. 3B, in conjunction with a redistribution layer or fan-out layer (not illustrated), signals from the SOC semiconductor die 302 are moved or transmitted through the multi-memory die 304 via through silicon vias (TSVs) 320. Solder balls 316b can be utilized at end points of the TSVs 320 that terminate in bond pads 314b to couple the packaging arrangement 300b to another package or PCB (not illustrated). Either packaging arrangement 300a or 300b can be configured with TSVs 320 and/or with a redistribution layer (not illustrated) in the multi-memory die 304. Figs. 4A and 4B illustrate packaging arrangements 400a and 400b, respectively. In the packaging arrangement 400a of Fig. 4A, a semiconductor die 402 is coupled to a top surface 403 of a multi-memory die 404 that includes two individual memory dies 102 that are still physically connected. More individual memory dies 102 can be included if desired. In accordance with an embodiment, the semiconductor die 402 is an SOC semiconductor die. The SOC semiconductor die 402 is coupled to the top surface 403 of the multi-memory die 404 via an adhesive 406. The multi-memory die 404 is coupled to a substrate 408 via an adhesive 410. In the packaging arrangement 400a illustrated in Fig. 4A, a wire bonding process is used to further couple the SOC semiconductor die 402 to the multi-memory die 404, and the multi-memory die 404 to the substrate 408. Wires 412 are coupled to bond pads 414 on the SOC semiconductor die 402 and bond pads 416 on the multi-memory die 404. The wire bonding between the SOC semiconductor die 402 and the multi-memory die 404 can be similar to that described with respect to Fig. 2A, especially in an embodiment where the individual memory dies 102 are DRAM dies. Wires 418 are coupled to bond pads 420 on the multi-memory die 404 and bond pads 422 on the substrate 408.
The wires 412 and 418 can be used to transmit signals between the SOC semiconductor die 402, the multi-memory die 404, and the substrate 408, along with a redistribution layer (not illustrated) in the multi-memory die 404. The SOC semiconductor die 402 in Fig. 4A may also be coupled directly to the substrate 408 via a wire bonding process. The SOC semiconductor die 402 may be coupled via a wire 413 coupled to a bond pad 415 on the SOC semiconductor die 402 and a bond pad 417 located on the substrate 408. Multiple wires 413 and bond pads 415 and 417 may be included to provide multiple connections between the SOC semiconductor die 402 and the substrate 408 if desired. Additionally, the substrate 408 can be coupled to another package or a PCB (not shown) via solder balls 424. The packaging arrangement 400b illustrated in Fig. 4B is similar to that illustrated in Fig. 4A. However, the SOC semiconductor die 402 is attached to the multi-memory die 404 via a flip chip attach process. Thus, signals between the SOC semiconductor die 402 and the multi-memory die 404 are transmitted via solder balls 407 located between bond pads (not illustrated) located on the SOC semiconductor die 402 and bond pads (not illustrated) located on the multi-memory die 404. Underfill material (not illustrated) may be included between the SOC semiconductor die 402 and the multi-memory die 404 if desired. In the packaging arrangement 400b illustrated in Fig. 4B, the multi-memory die 404 is coupled to a substrate 408 via an adhesive 410 and a wire bonding process similar to that described with respect to Fig. 4A. Thus, wires 418 are coupled to bond pads 420 on the multi-memory die 404 and bond pads 422 on the substrate 408. The wires 418 can be used to transmit signals between the multi-memory die 404 and the substrate 408. Signals between the SOC semiconductor die 402 and the substrate 408 can be transmitted through a redistribution layer (not illustrated) in the multi-memory die 404, and the flip chip connection between the SOC semiconductor die 402 and the multi-memory die 404. The substrate 408 can be coupled to another package or a PCB (not shown) via solder balls 424. In packaging arrangements 400a and 400b, the substrate 408 further includes routing structures 426, 428, 430, 432, and 434. The routing structures 426, 428, 430, 432, and 434 generally comprise an electrically conductive material, e.g., copper, to route electrical signals through the substrate 408. As illustrated, the routing structures 426, 428, 430, 432, and 434 can include line-type structures to route the electrical signals within a layer of the substrate 408 and/or via-type structures to route the electrical signals through a layer of the substrate 408. In other embodiments, routing structures 426, 428, 430, 432, and 434 can include other configurations in addition to or in lieu of those depicted here. While a particular configuration of routing structures has been briefly described and illustrated for the substrate 408, other configurations of routing structures may be used within substrate 408. As previously noted, solder balls 424 can be coupled to the substrate 408 at bond pads and TSVs of the routing structure and utilized to couple the packaging arrangements 400a and 400b to another package or to a PCB (not shown). A molded body 436 can be included in the packaging arrangements 400a and 400b if desired. Figs. 5A and 5B illustrate packaging arrangements 500a and 500b, respectively, that are similar to the packaging arrangement 400b illustrated in Fig. 4B.
However, in the packaging arrangements 500a and 500b illustrated in Figs. 5A and 5B, a multi-memory die 504 includes TSVs 510 for routing signals to and from a semiconductor die 502 through the multi-memory die 504 in conjunction with a redistribution layer (not illustrated). The multi-memory die 504 includes two individual memory dies 102 that are still physically connected. More individual memory dies 102 can be included if desired. In accordance with an embodiment, the semiconductor die 502 is an SOC semiconductor die. The SOC semiconductor die 502 is coupled to the multi-memory die 504 via a flip chip attach process. Thus, signals between the SOC semiconductor die 502 and the multi-memory die 504 are transmitted via solder balls 506 located between bond pads (not illustrated) located on the SOC semiconductor die 502 and bond pads (not illustrated) on the multi-memory die 504. The packaging arrangement 500a includes a molded body 512 that can be utilized to protect the various components of the packaging arrangement 500a. The multi-memory die 504 is coupled to the substrate 508 via solder balls 514. Underfill material (not illustrated) can be included between the multi-memory die 504 and the substrate 508 if desired. In the packaging arrangement 500b of Fig. 5B, no molded body 512 is provided within the packaging arrangement. Underfill material 516 is included between the multi-memory die 504 and the substrate 508. As with the packaging arrangement 500a of Fig. 5A, the multi-memory die 504 includes TSVs 510 for routing signals to and from the SOC semiconductor die 502 through the multi-memory die 504 as opposed to a redistribution layer (although the multi-memory die 504 may still include a redistribution layer if desired). The SOC semiconductor die 502 is coupled to the multi-memory die 504 via a flip chip attach process. Thus, signals between the SOC semiconductor die 502 and the multi-memory die 504 are transmitted via solder balls 506 located between bond pads (not illustrated) located on the SOC semiconductor die 502 and bond pads (not illustrated) on the multi-memory die 504. As in the packaging arrangements 400a and 400b of Figs. 4A and 4B, the substrate 508 further includes routing structures 526, 528, 530, 532, and 534. The routing structures 526, 528, 530, 532, and 534 generally comprise an electrically conductive material, e.g., copper, to route electrical signals through the substrate 508. As illustrated, the routing structures 526, 528, 530, 532, and 534 can include line-type structures to route the electrical signals within a layer of the substrate 508 and/or via-type structures to route the electrical signals through a layer of the substrate 508. In other embodiments, the routing structures 526, 528, 530, 532, and 534 can include other configurations in addition to or in lieu of those depicted here. While a particular configuration of routing structures has been briefly described and illustrated for the substrate 508, other configurations of routing structures may be used within substrate 508. Solder balls 524 can be coupled to the substrate 508 at bond pads and TSVs of the routing structures and utilized to couple the packaging arrangements 500a and 500b to another package or to a PCB (not shown). Fig. 5C illustrates another example of a packaging arrangement 500c. In the embodiment illustrated in Fig. 5C, a multi-memory die 504 is disposed on a heat sink 540. The multi-memory die 504 is coupled to the heat sink 540 via adhesive, epoxy, etc. (not illustrated). 
An SOC semiconductor die 502 is disposed on the multi-memory die 504. The SOC semiconductor die 502 is coupled to the multi-memory die 504 via adhesive, epoxy, etc. (not illustrated). Wires 508 are used to couple the SOC semiconductor die 502 to the multi-memory die 504 via bond pads (not illustrated). Two substrates 514 are coupled to the multi-memory die 504, one on each side of the multi-memory die 504. Wires 516 are used to couple the SOC semiconductor die 502 to the substrates 514 via bond pads (not illustrated). The multi-memory die 504 may also be coupled to the substrates 514 directly by wires 518 (bond pads not illustrated). The multi-memory die 504 may be coupled to the substrates 514 through the SOC semiconductor die 502 due to the wire bond process that couples the SOC semiconductor die 502 to the substrates 514. The multi-memory die 504 may also be coupled to the substrates 514 via solder balls and bond pads (not illustrated). Alternatively, the multi-memory die 504 may be coupled to the substrates 514 via a wire bonding process (not illustrated), with additional physical coupling to the substrates 514 via, for example, an adhesive, epoxy, solder balls, etc. (not illustrated). Solder balls 520 are provided to allow the packaging arrangement 500c to be coupled to, for example, a substrate or printed circuit board (PCB), another packaging arrangement, etc. (not illustrated). Figs. 6A and 6B illustrate packaging arrangements 600a and 600b, respectively, each including a multi-memory die 604 that includes two individual memory dies 102 that are still physically connected. More individual memory dies 102 can be included if desired. In the packaging arrangement 600a, the multi-memory die 604 is coupled to a substrate 608 via adhesive 610. Windows 612 are defined within the substrate 608 such that bond pads 614 on the multi-memory die 604 can be coupled to the substrate 608 via a wire bonding process. Thus, wires 616 couple bond pads 614 of the multi-memory die 604 to bond pads 618 located on the substrate 608. A passivation material 620 can be provided to protect the wire bond connections. A molded body 634 can be provided if desired. The packaging arrangement 600a can be coupled to another package 630 via solder balls 624. Such a package 630 can be any other type of package such as, for example, a processor package, a memory package, a System on Chip package, etc. The resulting overall package can then be coupled to another package or to a PCB (not shown) via solder balls 632. Fig. 6B illustrates a packaging arrangement 600b similar to packaging arrangement 600a illustrated in Fig. 6A. The packaging arrangement 600b includes multiple multi-memory dies 604 stacked atop one another. The multi-memory dies 604 electrically communicate with each other via TSVs 626 and/or redistribution layers (not illustrated). The bottom multi-memory die 604a passes signals to and from the substrate 608 via the wire bond arrangement between the bottom multi-memory die 604a and the substrate 608. The substrate 608 generally includes a redistribution layer (not illustrated). A molded body 634 can be provided if desired. Fig. 7 illustrates a packaging arrangement 700 that includes multiple multi-memory dies 704 stacked atop one another and coupled to a package 720 via solder balls 706. TSVs 710, along with redistribution layers (not illustrated) within the multi-memory dies 704, are utilized to transmit signals among the multiple multi-memory dies 704 and to the second package 720. 
Each multi-memory die 704 includes two individual memory dies 102 that are still physically connected. More individual memory dies 102 can be included if desired. While Fig. 7 illustrates only two multi-memory dies 704 stacked on top of each other, depending upon the design and application for the packaging arrangement 700, the packaging arrangement 700 may include more than two multi-memory dies 704 stacked atop one another. A molded body (not illustrated) may be provided around the stacked multi-memory dies 704 if desired. The resulting overall package can then be coupled to another package or to a PCB (not shown) via solder balls 732. Referring to Fig. 8A, a top view of an example of a packaging arrangement 800 is illustrated. Fig. 8B provides a perspective view of the packaging arrangement 800. The packaging arrangement 800 includes a semiconductor die 802. In an embodiment, the semiconductor die 802 is configured as an SOC semiconductor die 802. The packaging arrangement 800 also includes a first multi-memory die 804a and a second multi-memory die 804b. In one embodiment, each of the first and second multi-memory dies 804a, 804b includes two individual memory dies 102, as previously described, in a one by two configuration. In accordance with an embodiment, the individual memory dies 102 are dynamic random access memory (DRAM) dies. Connections are provided by bond wires 806 between the SOC semiconductor die 802 and the multi-memory dies 804a, 804b via bond pads (not illustrated) located on both the SOC semiconductor die 802 and the multi-memory dies 804a, 804b. Fig. 9 illustrates a top view of each of the first and second multi-memory dies 804a, 804b, where each of the multi-memory dies 804a, 804b includes two individual memory dies 102 arranged in a one by two configuration. In other embodiments, the first and second multi-memory dies 804a, 804b may each include more than two memory dies 102. For example, each of the first and second multi-memory dies 804a, 804b may include four dies 102a-d arranged in a two by two configuration as illustrated in Fig. 10. Other possible configurations for the first and second multi-memory dies 804a, 804b are also previously described herein, e.g., six dies in a two by three configuration, three dies in a one by three configuration, etc. Additionally, the multi-memory dies 804a, 804b may each have a different number of individual memory dies 102 and/or a different configuration with respect to each other. Referring back to Figs. 8A and 8B, the first multi-memory die 804a is stacked on top of the second multi-memory die 804b, forming a cross-like configuration. The SOC semiconductor die 802 is stacked on top of the first multi-memory die 804a. As previously noted, connections are provided by bond wires 806 between the SOC semiconductor die 802 and the multi-memory dies 804a, 804b via bond pads (not illustrated) located on both the SOC semiconductor die 802 and the multi-memory dies 804a, 804b. The SOC semiconductor die 802 and the first and second multi-memory dies 804a, 804b can be coupled to each other via an adhesive, epoxy, solder balls, etc. (not illustrated). Fig. 11 is a side view of the packaging arrangement 800 illustrated in Figs. 8A and 8B as viewed from the direction AA. For simplicity of illustration, connections from the SOC semiconductor die 802 to the multi-memory dies 804a, 804b are omitted. Referring to Fig. 12, the packaging arrangement 800 illustrated in Figs. 
8A and 8B may further be combined with a substrate or printed circuit board (PCB) (hereinafter substrate) 1202 to provide a semiconductor package 1200. Fig. 12 provides a top view of the semiconductor package 1200. The semiconductor package 1200 includes the substrate 1202 and the packaging arrangement 800. The substrate 1202 includes rows of solder balls (or a ball grid array) 1204 and an opening 1206, which provides access to the SOC semiconductor die 802. The packaging arrangement 800 is located under the substrate 1202, as illustrated by the dashed lines representing the first and second multi-memory dies 804a, 804b. Sections of the ball grid array 1204 may be divided to specifically handle signals from corresponding individual memory dies 102 in the first and second multi-memory dies 804a, 804b. The ball grid array 1204 can be used to couple the semiconductor package 1200 to another packaging arrangement or a substrate or printed circuit board (PCB) (not illustrated). For simplicity of illustration, connections from the packaging arrangement 800 to the substrate 1202 are not illustrated in Fig. 12. Fig. 13 illustrates a cross-sectional view of the semiconductor package 1200 illustrated in Fig. 12. Through the opening 1206, connections can be established between the packaging arrangement 800 and the substrate 1202. In one embodiment, connections are provided between the first and second multi-memory dies 804a, 804b and the SOC semiconductor die 802 via wires 1302 (bond pads not illustrated), and connections are provided between the SOC semiconductor die 802 and the substrate 1202 via wires 1304 (bond pads not illustrated). Connections may also be provided between the first and second multi-memory dies 804a, 804b and the substrate 1202 via wires 1306 (bond pads not illustrated, and wires between multi-memory die 804b and substrate 1202 not illustrated). Signals from the first and second multi-memory dies 804a, 804b can be first routed to the SOC semiconductor die 802. The SOC semiconductor die 802 can in turn re-route such signals to the substrate 1202. The reverse routing is also possible, such that signals from the substrate 1202 can be first routed to the SOC semiconductor die 802, and the SOC semiconductor die 802 can in turn re-route such signals to the first and second multi-memory dies 804a, 804b. Other circuitry components may be further connected to the substrate 1202. In another embodiment (not illustrated), signals from the first and second multi-memory dies 804a, 804b may be routed directly to and from the substrate 1202 through the opening 1206 via wires and bond pads without going through the SOC semiconductor die 802. Such an embodiment may require a bigger opening 1206 than is illustrated in Fig. 12. Thus, the size of the opening 1206 illustrated in Fig. 12 is merely an example and is not meant to be limiting. It should be noted that while the SOC semiconductor die 802 is described in connection with the packaging arrangement 800 and the semiconductor package 1200, the SOC semiconductor die 802 need not be included. For example, the semiconductor package 1200 as shown in Figs. 12 and 13 may omit the SOC semiconductor die 802. Such an embodiment would thus include only the substrate 1202 and the first and second multi-memory dies 804a, 804b. In this particular configuration, signals are routed via direct connections, e.g., wires and bond pads, between the first and second multi-memory dies 804a, 804b and the substrate 1202 through the opening 1206. Fig. 
14 is a process flow diagram of a method 1400 to create a packaging arrangement that includes a multi-memory die. At 1402, the method 1400 includes providing a multi-memory die that comprises multiple individual memory dies. Each of the individual memory dies is defined as an individual memory die within a wafer of semiconductor material during production of memory dies. The multi-memory die is created by singulating the wafer of semiconductor material into memory dies, where at least one of the memory dies is a multi-memory die that includes multiple individual memory dies that are still physically connected together. At 1404, the method includes coupling a semiconductor die to the multi-memory die. As previously noted, while the various packaging arrangements include multi-memory dies 204, 304, 404, 504, 604, 704, and 804a, 804b that are illustrated with only two individual memory dies 102 that are still physically connected, depending upon the design and the application for the packaging arrangements, the multi-memory dies 204, 304, 404, 504, 604, 704 and 804a, 804b may include more than two individual memory dies 102 that are still physically connected. Additionally, the multi-memory dies 204, 304, 404, 504, 604, 704 and 804a, 804b within a particular packaging arrangement 200, 300, 400, 500, 600, 700 and 800 may each have a different number of individual memory dies 102 and/or a different configuration with respect to each other. Likewise, while the semiconductor dies 202, 302, 402, 502 and 802 are primarily described as SOC semiconductor dies, other types of semiconductor dies may be used. Furthermore, details with respect to various operations and materials for creating the various components and coupling the various components together have been omitted for clarity and to avoid implying any types of limitations, since such details are well known. In embodiments where the multi-memory dies are DRAMs, the connections between the SOC semiconductor die and the DRAM I/Os are very short. This can allow the interface between the DRAM and the SOC to be run at the maximum possible speed without using any power-hungry resistive termination. Additionally, a minimum of two channels of DRAM interface can be achieved naturally without any congestion, due to the natural separation of the DRAM interface. Also, the control channels of the DRAM can be shared naturally if two channels are not needed. This allows for the achievement of twice the bandwidth naturally. Using the multi-memory dies as a substrate allows the substrate to serve as a good heat sink for the SOC semiconductor die if chip to chip bonding is used. The multi-memory die can be used as a fan-out substrate for an ultra low cost wire bond substrate. Even though the heat dissipation capability is reduced compared to an exposed multi-memory die, the multi-memory die still provides a substantial improvement in the heat dissipation capability of the SOC semiconductor die, as the multi-memory die effectively acts as a built-in heat spreader inside the packaging arrangements. Embodiments of the present disclosure allow for a natural low cost package on package (POP) packaging for the multi-memory die. This can be done by using a dual window opening of the ball grid array substrate (e.g., the embodiments of Figs. 6A and 6B). The I/O pins of the multi-memory die (e.g., a DRAM multi-memory die) can be routed to the edges of the multi-memory package, where the multi-memory packages can be stacked on an SOC package. 
This creates a more natural (direct) connection between the SOC die and the DRAMs, thus promoting a much higher speed connection between the SOC die and the DRAM die(s) compared to traditional LP-DDR I/O placement on four sides of a DRAM package, as opposed to the two sides of a dual-core DRAM die (e.g., two individual DRAM dies in a multi-memory DRAM die) POP package. The two-sided DRAM connection for a dual-core DRAM POP also results in much lower parasitic capacitances, thus further promoting running the interface at even higher clock frequencies even when several DRAM dies are stacked on top of each other using TSV processes. Ultimately, DRAM POP packaging may be eliminated by routing the I/Os of the DRAMs to the outer edges of the dual-core DRAM die and placing solder balls right on top of the dual-core DRAM die. This has the major benefit of reducing the packaging cost drastically while allowing very high ball density at practically no added cost. This can be even more useful when the DRAM die is designed with more than a 16b wide interface (e.g., 32b). This is also extremely attractive when a substrate of the SOC semiconductor die packaging is already designed for high density flip chip mounting of the SOC semiconductor die. In such a situation, the POP is simply two semiconductor dies on top of each other. The one that is closest to the substrate (the center) is the SOC semiconductor die, and the one furthest away from the substrate is the DRAM die. Both semiconductor dies are exposed to allow heat dissipation. The spacing between the SOC semiconductor die and the DRAM semiconductor die may be filled with thermally conductive material to allow the heat from the SOC semiconductor die to transfer to the DRAM semiconductor die. A back of the DRAM semiconductor die may be mated to a heatsink to reduce the DRAM semiconductor die temperature and ultimately the SOC semiconductor die temperature. Various operations may have been described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments. The description may use the terms "embodiment" or "embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments, are synonymous. Although certain embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments illustrated and described without departing from the scope. Those with skill in the art will readily appreciate that embodiments may be implemented in a very wide variety of ways. This application is intended to cover any adaptations or variations of the embodiments discussed herein. Therefore, it is manifestly intended that embodiments be limited only by the claims and the equivalents thereof. |
A method and apparatus configure a trusted domain and a plurality of isolated domains in a processor core. Each isolated domain is assigned a unique domain identifier. One or more resources are associated with each isolated domain. The associations are stored as permissions to access physical addresses of resources. Code to be executed by a hardware device is assigned to one of the isolated domains. The domain identifier for the assigned isolated domain is written to the hardware device. When the hardware device executes the code, each instruction is logically tagged with the domain identifier written to the hardware device. An instruction includes a request to access a physical address. The hardware device compares the domain identifier of the instruction with the permissions. If the permissions allow the domain identifier to access the physical address, then access to the resource at the physical address is allowed. Access is otherwise blocked. |
CLAIMS We Claim: 1. A method for providing security within non-trusted domains comprising: configuring a trusted domain and a plurality of isolated domains and assigning each of the isolated domains a unique domain identifier; associating one or more resources with each of the isolated domains and storing the associations as permissions to access physical addresses of the resources; assigning to one of the isolated domains code to be executed by a hardware device; and writing the unique domain identifier for the assigned isolated domain to the hardware device, wherein during execution of the code by the hardware device, each instruction is logically tagged with the written domain identifier, wherein access to the resources by the instruction is determined based on a comparison of the domain identifier of the instruction with the permissions. 2. The method of claim 1 wherein separate pieces of code to be executed on the processor core are assigned to different isolated domains. 3. The method of claim 1 wherein the trusted domain is able to access all of the resources. 4. The method of claim 1 wherein each of the isolated domains is only able to access the resources associated with that isolated domain. 5. The method of claim 3 wherein code in the trusted domain performs the configuring of the isolated domains, the configuring of the permissions, the assigning of the isolated domain to the code, and the writing of the unique domain identifier of the assigned isolated domain to the hardware device. 6. The method of claim 1 wherein the instruction comprises a request to access a physical address, wherein the method further comprises: comparing the domain identifier of the instruction with the permissions of the physical address in the instruction; and allowing access to the resource at the physical address in the instruction, if the domain identifier of the instruction has permission. 7. The method of claim 6 further comprising: blocking access to the resource at the physical address in the instruction, if the domain identifier of the instruction does not have permission. 8. The method of claim 1 wherein an execution engine of the hardware device is executing code in a current isolated domain, wherein the method further comprises: detecting an asynchronous event by the hardware device, the asynchronous event tagged with a domain identifier; comparing the domain identifier of the asynchronous event with a domain identifier of a current isolated domain; hiding the asynchronous event from the current isolated domain, if the domain identifier of the asynchronous event does not match the domain identifier of the current isolated domain; generating a transition request to transfer the execution engine from the current isolated domain to a target isolated domain associated with the domain identifier of the asynchronous event; and after the execution engine transitions to the target isolated domain, showing the asynchronous event in the target isolated domain. 9. The method of claim 8 wherein the execution engine transitions to the target isolated domain by: running clean up code in the current isolated domain, wherein the clean up code hides the resources associated with the current isolated domain; and running set up code in the target isolated domain, wherein the set up code enables resources associated with the target isolated domain. 10. 
The method of claim 9 wherein code in the trusted domain transitions the execution engine to the target isolated domain by: disabling the resources associated with the current isolated domain; and enabling the resources associated with the target isolated domain. 11. A system, comprising: a plurality of resources, each resource accessible at a physical address; a hardware device for executing code; and a trusted domain and a plurality of isolated domains, wherein each of the isolated domains is assigned a unique domain identifier, wherein one or more of the resources are associated with each of the isolated domains, wherein the associations are stored as permissions to access the physical addresses of the resources, wherein the code to be executed by the hardware device is assigned to one of the isolated domains, wherein the unique domain identifier for the assigned isolated domain is written to the hardware device, wherein during execution of the code by the hardware device, each instruction is logically tagged with the written domain identifier, wherein access to the resources is determined based on a comparison of the domain identifier of the instruction with the permissions. 12. The system of claim 11, wherein the trusted domain is able to access all of the resources. 13. The system of claim 11, wherein each of the isolated domains is only able to access the resources associated with that isolated domain. 14. The system of claim 11, wherein code in the trusted domain performs the configuring of the isolated domains, the configuring of the permissions, the assigning of the isolated domain to the code, and the writing of the unique domain identifier of the assigned isolated domain to the hardware device. 15. The system of claim 11, wherein the instruction comprises a request to access a physical address, wherein during execution of the code, the hardware device compares the permissions of the physical address in the instruction and allows access to the resource at the physical address in the instruction if the domain identifier of the instruction has permission. 16. The system of claim 15, wherein access to the resource at the physical address in the instruction is blocked if the domain identifier of the instruction does not have permission. 17. The system of claim 11, wherein an execution engine of the hardware device is executing code in a current isolated domain, wherein the hardware device: detects an asynchronous event tagged with a domain identifier, compares the domain identifier of the asynchronous event with a domain identifier of the current isolated domain, hides the asynchronous event from the current isolated domain, if the domain identifier of the asynchronous event does not match the domain identifier of the current isolated domain, generates a transition request to transfer the execution engine from the current isolated domain to a target isolated domain associated with the domain identifier of the asynchronous event, and, after the execution engine transitions to the target isolated domain, shows the asynchronous event in the target isolated domain. 18. The system of claim 17, wherein the execution engine transitions to the target isolated domain by running a clean up code in the current isolated domain, wherein the clean up code hides the resources associated with the current isolated domain, and running a set up code in the target isolated domain, wherein the set up code enables resources associated with the target isolated domain. 19. 
The system of claim 17, wherein code in the trusted domain transitions the execution engine to the target isolated domain by disabling the resources associated with the current isolated domain and enabling the resources associated with the target isolated domain. |
SECURITY FOR CODES RUNNING IN NON-TRUSTED DOMAINS IN A PROCESSOR CORE CROSS-REFERENCE TO RELATED APPLICATIONS [001] This application claims the benefit of U.S. Utility Patent Application Serial No. 12/026,840, filed February 6, 2008, and U.S. Provisional Patent Application Serial No. 60/889,086, filed February 9, 2007, assigned to the assignee of the present application. The disclosures of the above applications are incorporated herein by reference in their entirety. BACKGROUND [002] Many consumer products, such as mobile phones, set top boxes, personal digital assistants (PDA), and other systems running an operating system, are implemented with one or more processor cores. To secure a piece of code on the system, the processes that can access the code must be controlled. One approach is to partition a core into a trusted zone and a non-trusted zone. Code in the trusted zone can access all of the system resources. Code in the non-trusted zone has limited access to the system resources, as managed by code in the trusted zone. Two separate pieces of code in the non-trusted zone have the same level of permissions for access to the resources. However, it may be desirable to prevent access between the codes in the non-trusted zone. For example, an electronic wallet application and a digital rights management application may both run in the non-trusted zone. To maintain the integrity of each piece of code, access by the other needs to be controlled or prevented. A common approach is to run each piece of code in different cores. This approach, however, requires extra hardware. [003] Further, system resource access permissions are typically defined based on the virtual address space for the resources. Once permission for a piece of code is verified, the virtual address is translated to the physical address via a look-up table (LUT). However, this security mechanism is software based and may be bypassed or corrupted by a variety of means, including the direct use of the physical address of a resource, hence bypassing the virtual address translation. Thus, it may be difficult to prove the level of security provided by software based mechanisms. [004] Accordingly, it would be desirable to provide a method and system for providing security for codes running in non-trusted domains in a processor core. BRIEF SUMMARY OF THE INVENTION [005] A method and apparatus of the invention provide security within a processor core by configuring a trusted domain and a plurality of isolated domains. Each isolated domain is assigned a unique domain identifier. One or more resources are associated with each of the isolated domains. The associations are stored as permissions to access the physical addresses of the resources. A code to be executed by a hardware device is assigned to one of the isolated domains, and the unique domain identifier for the assigned isolated domain is written to the hardware device. When the hardware device executes the code, each instruction is logically tagged with the domain identifier written to the hardware device. The instruction is identifiable as a request to access a physical address of a resource. The hardware device compares the domain identifier of the instruction with the permissions of the physical address in the instruction. If the domain identifier of the instruction has permission to access this physical address, then access to the resource at the physical address is allowed. Access to the resource is otherwise blocked. 
In this manner, codes assigned to different isolated domains can run independently within the same processor core without interference from each other. Further, since the permissions are configured based on the physical addresses of the resources, concerns related to software-based security mechanisms are not relevant. BRIEF DESCRIPTION OF THE DRAWINGS [006] Figure 1 illustrates an exemplary embodiment of multiple isolated domains in a processor core. [007] Figure 2 is a block diagram of a processor core architecture in which embodiments of the invention may be implemented. [008] Figure 3 is a flowchart illustrating an exemplary embodiment of the creation of isolated domains in a processor core. [009] Figure 4 is a flowchart illustrating an exemplary embodiment of the use of the domain identifier. [010] Figure 5 is a flowchart illustrating an exemplary embodiment of the use of the domain identifier for asynchronous events. DETAILED DESCRIPTION [011] Embodiments of the invention relate to a method and apparatus for providing security for codes running in non-trusted domains of a processor core. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the embodiments and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein. [012] The invention will be described in the context of particular methods having certain steps. However, the method operates effectively with different and/or additional steps not inconsistent with the invention. [013] Figure 1 illustrates an exemplary embodiment of multiple isolated domains in a processor core. As illustrated in Figure 1, a processor core may be logically partitioned into a plurality of domains. The processor core is described in more detail below with reference to Figure 2. A "domain", as used in this specification, is a set of system resources (such as peripherals, memory space, etc.) which exist as a group. Any or all of these resources may be shared or private. Resources are private if they are accessible only to one domain. Resources are shared if they are accessible to more than one domain. Resources are accessible at their physical addresses. [014] The domains may include a trusted domain 101 and a plurality of non-trusted domains 102. The non-trusted domains 102 may include a main domain 103 and a plurality of isolated domains 104-106. A "trusted domain" is a domain which is privileged and able to configure other domains. A trusted domain 101 is able to access the resources of the processor core allocated to the trusted domain and the non-trusted domains. The trusted domain 101 includes code 107 for configuring the non-trusted domains 103-106 and for managing communications between codes in the non-trusted domains 103-106. [015] The "main domain" 103 is a primary non-trusted domain in the processor core. The operating system may be run in the main domain 103. Code in the main domain 103 is not able to access resources which are private to the trusted domain 101 or any of the isolated domains 104-106, but is able to access the shared resources. The "isolated domains" 104-106 are non-trusted domains that have at least some private resources. 
There may be multiple such isolated domains 104-106, each with its own resources. The isolated domains 104-106 are only able to access their own private and shared resources, as described below. Each of the non-trusted domains 102 is assigned a unique domain identifier. [016] Figure 2 is a block diagram of a processor core in which the invention may be implemented. The core 200 includes a hardware device 201 with an execution engine 202 for executing code. The hardware device 201 can be of any type, such as a processor, a memory controller, a universal asynchronous receiver/transmitter (UART) device, etc. When the execution engine 202 executes code, the instructions are placed in an execution pipeline 203. One or more caches 204 can be used to manage the execution of the instructions. The hardware device 201 and the cache 204 are coupled to a system bus 205. Coupled to the system bus 205 are resources, which can include memory 206 and one or more I/O devices 207. The hardware device 201 can access the resources 206-207 at their respective physical addresses. [017] Figure 3 is a flowchart illustrating an exemplary embodiment of the creation of isolated domains in a processor core. Referring to both Figures 2 and 3, when the core 200 is booted, code 107 in the trusted domain 101 configures a plurality of isolated domains 104-106. Each isolated domain is assigned a unique domain identifier (step 301). One or more resources 206-207 are associated with each isolated domain. The associations are stored as permissions to access the physical addresses of the resources 206-207 (step 302). When a hardware device 201 is configured, the code to be executed by the hardware device 201 is assigned to one of the isolated domains 104-106 (step 303). The domain identifier for the assigned isolated domain is then written to the hardware device 201 (step 304). [018] Figure 4 is a flowchart illustrating an exemplary embodiment of the use of the domain identifier. When the execution engine 202 executes code in an isolated domain, each instruction is logically tagged with the domain identifier of the isolated domain written to the hardware device 201 (step 401). Logically, the domain identifier is associated with each instruction in the execution pipeline 203, and the operations associated with this instruction have the associated domain identifier. In the exemplary embodiment, the domain identifier comprises additional bits sent on the system bus 205 along with the instruction. [019] During execution of the code, the hardware device 201 compares the domain identifier of the instruction with the permissions for the resources 206-207 (step 402). The instruction is identifiable as a request for access to a physical address of a resource. Thus, the hardware device 201 compares the permissions of the physical address in the instruction with the domain identifier of the instruction (step 403). If the domain identifier of the instruction has permission to access the physical address, then access to the resource at the physical address is allowed (step 404). Otherwise, access is blocked (step 405), and a "memory out of range" error is returned. The hardware device 201 can use the assigned domain identifier to check the permissions each time a resource access is attempted or at any time during the execution of the code.
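To make the flow of Figures 3 and 4 concrete, the following is a minimal C sketch of the permission store and the check of steps 402-405. It is a software model, not the hardware mechanism itself: the table layout, the bitmask encoding of domain identifiers, and the names (perm_entry, access_allowed, MAX_RESOURCES) are illustrative assumptions, and real hardware would typically match address ranges rather than single addresses.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_RESOURCES 16  /* illustrative table size */

/* One entry per protected resource: a physical address and a bitmask of
 * domain identifiers permitted to access it (bit k set => domain k allowed). */
typedef struct {
    uintptr_t phys_addr;
    uint32_t  allowed_domains;
} perm_entry;

/* Permission table, populated by code 107 in the trusted domain at
 * configuration time (steps 301-302). */
static perm_entry perm_table[MAX_RESOURCES];
static size_t perm_count;

/* Steps 402-405: returns true if an instruction tagged with domain_id may
 * access phys_addr; an unknown address or a cleared bit blocks the access
 * ("memory out of range"). */
bool access_allowed(uint32_t domain_id, uintptr_t phys_addr)
{
    for (size_t i = 0; i < perm_count; i++) {
        if (perm_table[i].phys_addr == phys_addr)
            return (perm_table[i].allowed_domains & (1u << domain_id)) != 0;
    }
    return false;
}
```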
[020] For example, assume that processor core 200 includes resources, RESOURCE1 and RESOURCE2, with physical addresses ADD1 and ADD2. During configuration of the core 200, two isolated domains, DOMAIN1 and DOMAIN2, are configured and assigned unique domain identifiers (step 301). Both RESOURCE1 and RESOURCE2 are associated with DOMAIN1, while only RESOURCE1 is associated with DOMAIN2. The permissions for ADD1 are stored as giving access to DOMAIN1 and DOMAIN2, and the permissions for ADD2 are stored as giving access to DOMAIN1 (step 302). [021] Assume that two applications, APP1 and APP2, are configured to run on PROCESSOR1 and PROCESSOR2, respectively. During the configuration of the applications, APP1 is assigned to DOMAIN1, and APP2 is assigned to DOMAIN2 (step 303). DOMAIN1 is then written to PROCESSOR1, and DOMAIN2 is written to PROCESSOR2 (step 304). [022] When PROCESSOR1 executes APP1, each instruction is logically tagged with DOMAIN1 (step 401). Assume that a first instruction of APP1 includes a request to access ADD1. PROCESSOR1 checks the permissions of ADD1 and determines that DOMAIN1 has been given access (steps 402-403). The first instruction is thus allowed access to the resource at ADD1 (step 404). Assume that a second instruction of APP1 includes a request to access ADD2. PROCESSOR1 checks the permissions of ADD2 and determines that DOMAIN1 has been given access (steps 402-403). The second instruction is thus allowed to access the resource at ADD2 (step 404). [023] When PROCESSOR2 executes APP2, each instruction is logically tagged with DOMAIN2 (step 401). Assume that a first instruction of APP2 includes a request to access ADD1. PROCESSOR2 checks the permissions of ADD1 and determines that DOMAIN2 has been given access (steps 402-403). The first instruction is thus allowed access to the resource at ADD1 (step 404). Assume that a second instruction of APP2 includes a request to access ADD2. PROCESSOR2 checks the permissions of ADD2 and determines that DOMAIN2 has not been given access (steps 402-403). The second instruction is thus blocked from accessing the resource at ADD2 (step 405). A "memory out of range" message is returned. [024] In this manner, APP1 and APP2 execute in separate isolated domains, and each is only able to access its own private or shared resources. Neither is able to access resources which are private to the trusted domain 101 or any of the other non-trusted domains. Neither APP1 nor APP2 needs to be modified. If APP1 and APP2 are required to communicate, this communication is managed through the code 107 in the trusted domain 101.
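The DOMAIN1/DOMAIN2 example maps directly onto the earlier sketch. Continuing that sketch, the addresses 0x1000 and 0x2000 below are illustrative placeholders for the unspecified physical addresses ADD1 and ADD2:

```c
enum { DOMAIN1 = 1, DOMAIN2 = 2 };  /* illustrative identifiers */

/* Trusted-domain configuration mirroring steps 301-302: ADD1 is shared by
 * both domains, ADD2 is private to DOMAIN1. */
void configure_example(void)
{
    perm_table[0] = (perm_entry){ 0x1000, (1u << DOMAIN1) | (1u << DOMAIN2) };
    perm_table[1] = (perm_entry){ 0x2000, (1u << DOMAIN1) };
    perm_count = 2;
}

/* access_allowed(DOMAIN1, 0x1000) -> true   (APP1's first instruction)
 * access_allowed(DOMAIN1, 0x2000) -> true   (APP1's second instruction)
 * access_allowed(DOMAIN2, 0x1000) -> true   (APP2's first instruction)
 * access_allowed(DOMAIN2, 0x2000) -> false  (APP2's second instruction
 *                                            is blocked, step 405)      */
```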
[025] Occasionally, the checking of the domain identifier cannot be performed in real time, such as for asynchronous events. Accesses from asynchronous events may not be related to the current isolated domain executing at an execution engine. The asynchronous event can be either from an external change, e.g., an interrupt, or from an action which took place some time previously, e.g., a DMA completion, at which time there was a different current domain. The isolated domain in which the event should be handled is the target isolated domain, which is identified by the domain identifier tagged on the asynchronous event. The target isolated domain can be the current isolated domain or an isolated domain different from the current isolated domain. [026] Figure 5 is a flowchart illustrating an exemplary embodiment of the use of the domain identifier for asynchronous events. When a hardware device 201 detects an asynchronous event (step 501), the hardware device 201 compares the domain identifier of the event with the domain identifier of the current isolated domain executing on an execution engine 202 (step 502). If they match (step 503), then the event is allowed to occur in the current isolated domain (step 504). If they do not match, then the event is hidden in the current isolated domain (step 505). The hardware device 201 then generates a transition request to the trusted domain 101 to transfer the asynchronous event to the target isolated domain (step 506). Code in the trusted domain 101 transitions the execution engine 202 to the target isolated domain (step 507). The event is then shown in the target isolated domain (step 508), in which the event is handled. The hardware device 201 compares the permissions of the physical addresses of the resources 206-207 with the domain identifier of the event to determine which resources the event can access, as described above with reference to Figure 4. [027] In the exemplary embodiment, the transition to the target isolated domain comprises a series of operations carried out between two instructions with different domain identifiers on the same execution engine or set of engines. The transition code can be implemented in any one of a number of ways. For example, clean up code is run in the current isolated domain, followed by a run of set up code in the target isolated domain. The clean up code hides the current isolated domain's resources. Once the transition to the target isolated domain occurs, the set up code enables the target isolated domain's resources. As another example, a single piece of code is run in the trusted domain 101 to disable the resources of the current isolated domain and to enable the resources of the target isolated domain. [028] In the exemplary embodiment, the transition code contains no operational code. The transition code only performs the transition from a current isolated domain to a target isolated domain. The operation of any instruction is then handled in the target isolated domain, not by the transition code. [029] For example, assume that a UART interrupt is configured to be taken in one isolated domain, DOMAIN1. Assume also that another isolated domain, DOMAIN2, is currently running on the execution engine 202. When the hardware device 201 detects the interrupt event (step 501), the hardware device 201 compares the domain identifier of the interrupt event, DOMAIN1, with the domain identifier of the currently running isolated domain, DOMAIN2 (step 502). Since they do not match (step 503), the interrupt event is hidden in DOMAIN2 (step 505). The hardware device 201 generates a transition request to the trusted domain 101 to transfer the interrupt event to DOMAIN1 (step 506). Code in the trusted domain 101 transitions the execution engine 202 from DOMAIN2 to DOMAIN1 (step 507). The interrupt event is then shown in DOMAIN1, where it is handled by the execution engine 202 (step 508). The hardware device 201 determines the permissions to access the physical addresses of the resources 206-207, as described above with reference to Figure 4.
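A hedged C sketch of the Figure 5 flow follows, reusing the definitions from the earlier sketches. The event structure, the current_domain register, and the trusted_transition helper are assumptions introduced for illustration; the patent leaves their concrete form open.

```c
#include <stdint.h>
#include <stdio.h>

/* An asynchronous event tagged with the domain identifier of the isolated
 * domain that should handle it (the target isolated domain). */
typedef struct {
    uint32_t domain_id;
    int      payload;   /* e.g., an interrupt number or DMA channel */
} async_event;

/* Register holding the domain identifier of the currently executing
 * isolated domain (DOMAIN2 running, per the UART example above). */
static uint32_t current_domain = DOMAIN2;

/* Stand-in for the trusted-domain transition code of paragraph [027]:
 * clean up (hide) the current domain's resources, then set up (enable)
 * the target domain's resources. */
static void trusted_transition(uint32_t target)
{
    current_domain = target;
}

/* Steps 501-508 of Figure 5. */
void handle_async_event(const async_event *ev)
{
    if (ev->domain_id != current_domain) {   /* steps 502-503 */
        /* Step 505: the event stays hidden in the current domain.
         * Steps 506-507: request a transition to the target domain. */
        trusted_transition(ev->domain_id);
    }
    /* Step 504 (match) or step 508 (after transition): show the event
     * in the now-current target isolated domain. */
    printf("event %d delivered in domain %u\n", ev->payload, current_domain);
}
```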
[030] In some cases, it may be more expedient to place a resource "above" the point where the domain identifier tag is added to an instruction. For example, an initial design may wish to execute all instructions at the system-on-chip (SOC) level, thus avoiding modification of the core 200. Examples of such resources include caches and a memory management unit/translation lookaside buffer (MMU/TLB), typically used in virtual address translation. If the execution engine 202 is executing one piece of code at a time, a register can be associated with the hardware device 201 for storing the domain identifier assigned to the code. The value in the register is logically attached to a group of instructions executed by the execution engine 202, rather than tagging each individual instruction. When the execution engine 202 transitions to a different isolated domain, the value in the register is changed to the domain identifier of that isolated domain. [031] If one or more of the caches in the processor core 200 are above the level where the domain identifier is added to an instruction, then when the execution engine 202 transitions to a different isolated domain, the cache is flushed of content belonging to the previously executing isolated domain. Flushing of the cache is required since access to the cache is not checked at this level. The flushing may be implemented in any number of ways, for example: defining only one isolated domain as cacheable; tagging cache contents to indicate which isolated domain the content belongs to, so that the cache can be selectively flushed of the contents of a particular isolated domain; or completely flushing the cache. [032] Similar to the cache, the MMU/TLB can exist above the point where the domain identifier is added to an instruction. Direct modification of the MMU/TLB would be a secure operation, and the address tables should either be secure or in the correct domain. As the domain identifier is used to determine permissions based on physical addresses rather than virtual addresses, there is no security breach if a TLB is "corrupted" to point to an undesirable address. [033] Although the exemplary embodiment is described above as a mechanism for securing access between codes in non-trusted domains for a processor core, the concept of multiple domains can be expanded to be an identifier for a task within the overall system. For example, the task may be to allocate bus bandwidth or processing time. This is normally done at the operating system level, but in this alternative embodiment, domains are used where there is more than one operating system running on the system. For example, a single digital signal processor (DSP) is used to perform multiple tasks, such as processing of multimedia and modem functions. Each task is assigned a different operating system or real-time operating system (RTOS), and is not allowed to occupy more than its allotted space on the system. Domains can be used at all levels of the system, such as allowing different fractions of a shared cache to be allocated to different tasks, different amounts of bus bandwidth, etc. The domain identifier can also be used for prioritization of the tasks within the system.
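The selective flush option of paragraph [031] can be sketched in C as follows; the per-line domain tag and the flat array model of the cache are illustrative assumptions made for the sketch:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 256  /* illustrative cache size */

/* Cache line tagged with the domain identifier of the isolated domain that
 * filled it; address tag and data fields are omitted for brevity. */
typedef struct {
    bool     valid;
    uint32_t domain_id;
} cache_line;

static cache_line cache[NUM_LINES];

/* On a transition away from old_domain, invalidate only that domain's
 * lines; lines filled by other isolated domains remain cached. The simpler
 * alternative in paragraph [031], a complete flush, would clear every
 * valid bit unconditionally. */
void flush_domain(uint32_t old_domain)
{
    for (int i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].domain_id == old_domain)
            cache[i].valid = false;
    }
}
```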
[034] A method and apparatus for providing security for codes running in non-trusted domains in a processor core have been disclosed. The method and apparatus configure a processor core to include a trusted domain and a plurality of isolated domains. Each of the isolated domains is assigned a unique domain identifier. One or more resources are associated with each of the isolated domains. The associations are stored as permissions to access the physical addresses of the resources. A code to be executed by a hardware device is associated with one of the isolated domains, and the unique domain identifier for the assigned isolated domain is written to the hardware device. When the hardware device executes the code, each instruction is logically tagged with the domain identifier written to the hardware device. The instruction is identifiable as a request to access a physical address of a resource. The hardware device compares the domain identifier of the instruction with the permissions of the physical address in the instruction. If the domain identifier of the instruction has permission to access this physical address, then access to the resource at the physical address is allowed. Access to the resource is otherwise blocked. In this manner, codes assigned to different isolated domains can run independently within the same processor core without interference from each other. Further, since the permissions are configured based on the physical addresses of the resources, concerns related to software-based security mechanisms are not relevant. [035] The invention has been described in accordance with the embodiments shown, and one of ordinary skill in the art will readily recognize that there could be variations to the embodiments, and any variations would be within the spirit and scope of the invention. For example, the invention can be implemented using hardware, software, a computer readable medium containing program instructions, or a combination thereof. Software written according to the invention is either stored in some form of computer-readable medium, such as memory or CD-ROM, or transmitted over a network, and is executed by a processor. Consequently, a computer-readable medium is intended to include a computer readable signal, which may be, for example, transmitted over a network. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims. |
Integrated circuits (ICs) having bus-based programmable interconnect structures are provided. An IC includes substantially similar logic blocks and a programmable interconnect structure programmably interconnecting the logic blocks. The programmable interconnect structure includes bus structures and programmable switching structures programmably interconnecting the bus structures. Each bus structure includes N data lines, where N is an integer greater than one, and N commonly controlled storage elements (e.g., latches) for storing data on the N data lines. In some embodiments, at least one of the bus structures includes handshake logic, including a C-element coupled to drive a ready line, to receive an acknowledge line, and to provide a control signal to each of the N storage elements in the bus structure. In some embodiments, each of the programmable switching structures includes N M-input data multiplexers, an M-input ready multiplexer, and an M-output acknowledge demultiplexer, M being an integer greater than one. |
What is claimed is: 1. An integrated circuit (IC), comprising: a plurality of substantially similar logic blocks; and a programmable interconnect structure programmably interconnecting the logic blocks one to another, wherein the programmable interconnect structure comprises: a plurality of bus structures each comprising N data lines, N being an integer greater than one, and N commonly controlled storage elements for storing data on the N data lines; and a plurality of programmable switching structures programmably interconnecting the bus structures to one another and to the logic blocks. 2. The IC of claim 1, wherein the storage elements comprise latches. 3. The IC of claim 1, wherein at least one of the bus structures comprises handshake logic. 4. The IC of claim 3, wherein each bus structure further comprises: a C-element coupled to drive a ready line, to receive an acknowledge line, and to provide a control signal to each of the N storage elements in the bus structure. 5. The IC of claim 4, wherein each programmable switching structure comprises: N M-input data multiplexers each coupled to drive a data input of a corresponding storage element in a corresponding bus structure, M being an integer greater than one; an M-input ready multiplexer coupled to drive a ready input of the C-element of the corresponding bus structure; and an M-output acknowledge demultiplexer driven by an acknowledge output of the C-element of the corresponding bus structure. 6. The IC of claim 5, wherein each programmable switching structure further comprises a plurality of memory cells coupled to select inputs of the data multiplexers, the ready multiplexer, and the acknowledge demultiplexer. 7. An integrated circuit (IC), comprising: an array of substantially similar tiles, each tile including: a logic block; and a programmable routing structure programmably interconnecting the logic block to one or more logic blocks in other tiles, wherein in each of the tiles the programmable routing structure comprises: a plurality of bus structures each comprising N data lines, N being an integer greater than one, and N commonly controlled storage elements for storing data on the N data lines; and a plurality of programmable switching structures programmably interconnecting the bus structures to one another and to the logic blocks. 8. The IC of claim 7, wherein the storage elements comprise latches. 9. The IC of claim 7, wherein at least one of the bus structures in each tile comprises handshake logic. 10. The IC of claim 7, wherein each bus structure further comprises: a C-element coupled to drive a ready line, to receive an acknowledge line, and to provide a control signal to each of the N storage elements in the bus structure. 11. The IC of claim 10, wherein each programmable switching structure comprises: N M-input data multiplexers each coupled to drive a data input of a corresponding storage element in a corresponding bus structure, M being an integer greater than one; an M-input ready multiplexer coupled to drive a ready input of the C-element of the corresponding bus structure; and an M-output acknowledge demultiplexer driven by an acknowledge output of the C-element of the corresponding bus structure. 12. The IC of claim 7, wherein: a first programmable switching structure in a first tile couples a logic block in the first tile to a first bus structure in the first tile, the first bus structure being coupled to a second tile in the array. 13. 
The IC of claim 12, wherein a second programmable switching structure in the second tile couples the first bus structure to a logic block in the second tile. 14. The IC of claim 12, wherein a second programmable switching structure in the second tile couples the first bus structure to a second bus structure in the second tile. 15. An integrated circuit (IC), comprising: a plurality of logic blocks; and a programmable interconnect structure programmably interconnecting the logic blocks one to another, wherein the programmable interconnect structure comprises: a plurality of bus structures, each bus structure comprising: N data lines; N latches each coupled to a corresponding one of the data lines; exactly one ready line; exactly one acknowledge line; and exactly one C-element coupled to the ready line, the acknowledge line, and an enable input of each of the latches, N being an integer greater than one; and a plurality of programmable switching structures programmably interconnecting the bus structures to one another and to the logic blocks. 16. The IC of claim 15, wherein each of the programmable switching structures comprises: N M-input data multiplexers each coupled to drive a data input of a corresponding one of the latches, M being an integer greater than one; an M-input ready multiplexer coupled to drive the ready line; and an M-output acknowledge demultiplexer driven by the acknowledge line. 17. The IC of claim 16, wherein each of the programmable switching structures further comprises at least one memory cell, wherein each of the data multiplexers, the ready multiplexer, and the acknowledge demultiplexer has at least one select input coupled to an output of the at least one memory cell. |
BACKGROUND

Programmable integrated circuits (ICs) are a well-known type of IC that can be programmed to perform specified logic functions. An exemplary type of programmable IC, the field programmable gate array (FPGA), typically includes an array of programmable tiles. These programmable tiles can include, for example, input/output blocks (IOBs), configurable logic blocks (CLBs), dedicated random access memory blocks (BRAM), multipliers, digital signal processing blocks (DSPs), processors, clock managers, delay-locked loops (DLLs), and so forth.

Each programmable tile typically includes both programmable interconnect and programmable logic. The programmable interconnect typically includes a large number of interconnect lines of varying lengths interconnected by programmable interconnect points (PIPs). The programmable logic implements the logic of a user design using programmable elements that can include, for example, function generators, registers, arithmetic logic, and so forth.

The programmable interconnect and programmable logic are typically programmed by loading a stream of configuration data into internal configuration memory cells that define how the programmable elements are configured. The configuration data can be read from memory (e.g., from an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

Another type of programmable IC is the Complex Programmable Logic Device, or CPLD. A CPLD includes two or more “function blocks” connected together and to input/output (I/O) resources by an interconnect switch matrix. Each function block of the CPLD includes a two-level AND/OR structure similar to those used in Programmable Logic Arrays (PLAs) and Programmable Array Logic (PAL) devices. In CPLDs, configuration data is typically stored on-chip in non-volatile memory. In some CPLDs, the configuration data is stored on-chip in non-volatile memory and then downloaded to volatile memory as part of an initial configuration (programming) sequence.

For all of these programmable ICs, the functionality of the device is controlled by data bits provided to the device for that purpose. The data bits can be stored in volatile memory (e.g., static memory cells, as in FPGAs and some CPLDs), in non-volatile memory (e.g., FLASH memory, as in some CPLDs), or in any other type of memory cell.

Other programmable ICs are programmed by applying a processing layer, such as a metal layer, that programmably interconnects the various elements on the device. These ICs are known as mask programmable devices. Programmable ICs can also be implemented in other ways, e.g., using fuse or antifuse technology. The terms “programmable integrated circuit” and “programmable IC” include but are not limited to these exemplary devices, as well as encompassing devices that are only partially programmable. For example, one type of programmable IC includes a combination of hard-coded transistor logic and a programmable switch fabric that programmably interconnects the hard-coded transistor logic.

Traditionally, programmable ICs include one or more extensive dedicated clock networks, as well as clock management blocks that provide clock signals for distribution to all portions of the IC via the dedicated clock networks. These clock management blocks can be quite complicated, encompassing, for example, delay-locked loops (DLLs), phase-locked loops (PLLs), digital clock managers (DCMs), and so forth.
For example, the Virtex®-4 series of FPGAs from Xilinx, Inc. includes up to 20 DCMs, each providing individual clock deskewing, frequency synthesis, phase shifting, and/or dynamic reconfiguration for a portion of the IC. Thus, a significant amount of design and testing time is required to provide these features in the device, and their use also requires time and effort on the part of the system designer. Additionally, because a global clock signal may be needed at virtually any position in a programmable IC, a global clock network is very extensive and consumes large amounts of power when in use.A large IC design typically includes a large number of “race conditions”, where two or more signals are “racing” each other to a given destination, such as the input terminals of a logic block. Typically one of these signals is a clock signal, which must reach the destination within a certain window within which the data being provided to the destination is valid. Thus, the well-known timing requirements known as the “setup time” for data (the amount of time by which the data signal must precede the active edge of the clock signal at the input terminals of the logic block) and the “hold time” for the data (the amount of time the data signal must remain at the data input terminal after the arrival of the active edge of the clock signal) are vital to the success of a clocked design, and must be met for every clocked element, or the logic cannot be expected to operate properly.One of the biggest challenges in providing clock services for a large programmable IC is the problem of skew. Clock and data signals distributed over a large area are naturally delayed by varying amounts, depending upon their origins and destinations as well as the nature of the network paths through which they are distributed. Therefore, clock signals are often skewed one from another, and from the related data signals. Yet, the setup and hold time requirements must be met in every instance to guarantee reliable operation of a user design implemented in the programmable IC. Therefore, it is clear that the design of reliable clock networks for a programmable IC containing potentially a hundred thousand flip-flops or other clock elements may consume a large amount of engineering resources and may adversely impact the design cycle of the programmable IC.SUMMARYThe invention provides integrated circuits (ICs) having bus-based programmable interconnect structures. The IC includes a number of substantially similar logic blocks and a programmable interconnect structure programmably interconnecting the logic blocks. The programmable interconnect structure includes a number of bus structures and a number of programmable switching structures programmably interconnecting the bus structures. Each bus structure includes N data lines, where N is an integer greater than one, and N commonly controlled storage elements (e.g., latches) for storing data on the N data lines.In some embodiments, at least one of the bus structures includes handshake logic, including a C-element coupled to drive a ready line, to receive an acknowledge line, and to provide a control signal to each of the N storage elements in the bus structure.In some embodiments, each of the programmable switching structures includes N M-input data multiplexers, an M-input ready multiplexer, and an M-output acknowledge demultiplexer, M being an integer greater than one. 
Each data multiplexer is coupled to drive a data input of a corresponding latch, the ready multiplexer is coupled to drive the ready line, and the acknowledge demultiplexer is driven by the acknowledge line.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example, and not by way of limitation, in the following figures.FIG. 1 is a block diagram showing an exemplary integrated circuit including an array of logic blocks interconnected by a pipelined interconnect structure.FIG. 2 illustrates a first exemplary programmable routing structure operating in a 2-phase handshake mode that can be used, for example, in the IC of FIG. 1.FIG. 3 illustrates a known C-element that can be used in handshake logic.FIG. 4 illustrates in tabular form the functionality of the C-element of FIG. 3.FIG. 5 illustrates in tabular form the functionality of the C-element of FIG. 2.FIG. 6 is a waveform diagram illustrating the functionality of 2-phase handshake logic such as that shown in FIG. 2.FIG. 7 illustrates a first known multiplexer structure using CMOS transmission gates.FIG. 8 illustrates a second known multiplexer structure using N-channel transistors.FIG. 9 illustrates how the exemplary routing structure of FIG. 2 can be modified to operate in a 4-phase handshake mode that can be used, for example, in the IC of FIG. 1.FIG. 10 is a waveform diagram illustrating the functionality of 4-phase handshake logic such as that shown in FIG. 9.FIG. 11 illustrates a second exemplary programmable routing structure operating in a 2-phase handshake mode that can be used, for example, in the IC of FIG. 1.FIG. 12 illustrates how the performance of the embodiment of FIG. 11 can be improved by using multiple oxide thicknesses for the transistors.FIG. 13 illustrates a known circuit that can be used, for example, to implement the logical AND gates of FIG. 12.FIG. 14 illustrates a first improved circuit that can be used, for example, to implement the logical AND gates of FIG. 12.FIG. 15 illustrates a second improved circuit that can be used, for example, to implement the logical AND gates of FIG. 12.FIG. 16 illustrates how the exemplary routing structure of FIG. 11 can be modified to operate in a 4-phase handshake mode that can be used, for example, in the IC of FIG. 1.FIG. 17 illustrates a third exemplary programmable routing structure operating in a 2-phase handshake mode that can be used, for example, in the IC of FIG. 1.FIG. 18 illustrates how the exemplary routing structure of FIG. 17 can be modified to operate in a 4-phase handshake mode and to include initialization circuitry for the routing structure.FIG. 19 is a flow diagram illustrating a method of initializing a routing structure in an IC that might or might not be programmable.FIG. 20 is a flow diagram illustrating a method of initializing a routing structure in a programmable IC.FIG. 21 is a waveform diagram illustrating how the methods of FIGS. 19 and 20 can be applied to the circuitry of FIG. 18.DETAILED DESCRIPTIONWhile the specification concludes with claims defining some features of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the description in conjunction with the drawings. As required, detailed embodiments of the present invention are disclosed herein. However, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which can be embodied in various forms. 
Therefore, specific structural and/or functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the inventive arrangements in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting, but rather to provide an understandable description of the invention.

For example, the present invention is applicable to a variety of integrated circuits (ICs). An appreciation of the present invention is presented by way of specific examples utilizing programmable ICs. However, the present invention is not limited by these examples, and may be applied to any applicable IC and/or circuit structure.

FIG. 1 is a block diagram showing an exemplary integrated circuit including an array of substantially similar logic blocks interconnected by a pipelined interconnect structure. The interconnect structure in the illustrated embodiment includes an array of substantially similar programmable routing structures 101, with each of the routing structures 101 being coupled to an associated logic block 102 in the array of logic blocks. Looked at another way, the IC of FIG. 1 includes an array of substantially similar tiles 100a-100d, where each tile includes a programmable routing structure 101 and an associated logic block 102.

In the present specification, the term “substantially similar” is understood to mean similar to the extent that each substantially similar element performs the same functions in the same way. For example, substantially similar logic blocks include the same internal elements, e.g., lookup table, storage elements, and so forth, have the same internal connections between these elements, and are programmed in the same fashion. Similarly, substantially similar programmable routing structures couple together interconnect lines having the same logical relationships, are programmed in the same fashion, and so forth. Substantially similar elements may have a single layout, stepped and repeated, but this is not always the case. The addition of relatively small amounts of extra logic (e.g., buffers, capacitors, etc.) to one or more logic blocks and/or programmable routing structures does not prevent the logic blocks, tiles, and/or programmable routing structures from being substantially similar, nor do changes in layout, transistor sizes, and so forth.

In the illustrated embodiment, each logic block 102 includes at least one storage element 103 (e.g., flip-flop and/or latch). Such logic blocks are well known, e.g., in the Virtex™ field programmable gate arrays (FPGAs) from Xilinx, Inc. Typically, one storage element is coupled to drive an output of the logic block, e.g., directly or through an output multiplexer and/or buffer. Other storage elements may be included in the logic block as well, to provide additional pipelining functions. In the embodiment of FIG. 1, each logic block includes two storage elements, with one being positioned at the output of the logic block. In some embodiments (not shown), each logic block includes more than one output driven by a storage element. The output of each logic block may be a single bit, or a multi-bit bus.

Each logic block 102 is coupled to an associated programmable routing structure 101. The routing structure 101 is also pipelined, including a storage element 103 at each output.
Thus, the routing structures and logic blocks can work together to create a fully pipelined design. Such pipelining may overcome a limitation of known programmable IC architectures, in which long interconnect lines sometimes limit the speed of operation for a circuit implemented in the IC. By pipelining the routing structures, the throughput of the overall design may be increased. In some embodiments (not shown), one or more additional outputs of routing structure 101 are not pipelined, i.e., not driven by storage elements.FIG. 1 illustrates an IC in which the outputs of each routing structure are coupled to drive either an input of another routing structure, or an input of one of the logic blocks. The output of each logic block is coupled to drive an input of a corresponding programmable routing structure. In the pictured embodiment, each routing structure is coupled to vertical interconnect lines 104, horizontal interconnect lines 105, and diagonal interconnect lines 106. However, in some embodiments some of these options (e.g., diagonal interconnect lines 106) are not provided. Note that interconnect lines 104-106 may be single lines or multi-bit busses. For example, in one embodiment each interconnect line 104-106 is an 8-bit bus, and also includes supporting signals, as is later described. Additionally, the interconnect lines in the embodiments described herein are all unidirectional. As is later described, unidirectional interconnect lines may permit a more efficient implementation of a pipelined programmable routing structure, because the overall number of routing multiplexers can be reduced relative to a bidirectional implementation.The interconnect lines shown in FIG. 1 are all “singles”, that is, they connect a routing structure to another routing structure in an adjacent tile, either vertically adjacent (interconnect lines 104), horizontally adjacent (interconnect lines 105), or diagonally adjacent (interconnect lines 106). As is well known, interconnect lines in this type of IC architecture may include “doubles”, which connect to a routing structure in a tile two tiles away, “quads”, which connect to a routing structure in a tile four tiles away, and/or interconnect lines of other lengths. For clarity, interconnect lines other than singles are omitted from FIG. 1. However, some embodiments may include such interconnect lines. In some embodiments, such as those that are now described, it may be desirable not to include interconnect lines having too large a delay. One such embodiment includes singles and doubles, with no longer interconnect lines being provided.In some embodiments, storage elements are not included for every interconnect line in every routing structure. For example, storage elements can be included in every tile for doubles, and only every other tile for singles. In other embodiments, every routing structure includes a storage element for each interconnect line.Including asynchronous storage elements (e.g., latches) in the interconnect structure enables the use of asynchronous routing. In some embodiments, both the interconnect structure and the logic blocks are implemented asynchronously. Thus, the high level of design complexity caused by the problem of clock skew in a large IC is overcome. Additionally, the elimination of large global clock networks from the IC may substantially reduce the amount of power consumed by the IC when in operation.FIG. 2 illustrates an exemplary programmable routing structure that can be used, for example, in the IC of FIG. 
1 when the IC utilizes an asynchronous design. The embodiment of FIG. 2, as well as the other embodiments of the programmable routing structure shown in the other figures, is preferably used with an asynchronous logic block having a storage element at the output. Additional storage elements may also be optionally included in the logic block to provide further pipelining.

In FIG. 2 and the other illustrated embodiments, the interconnect structure is bus-based. In other words, the logic blocks and the programmable routing structures are interconnected by data lines organized as multi-bit busses coupled to multi-bit ports of the logic blocks and the programmable routing structures. For example, each arrow in FIG. 1 may be thought of as an N-bit bus, where N is an integer greater than one. Note, however, that while the pictured embodiments illustrate an interconnect structure based on multi-bit busses, this need not be the case. It will be clear to those of skill in the relevant arts that the illustrated embodiments may be readily adapted to apply to single-bit interconnect lines. In other words, in some embodiments, N may have a value of one.

Note also that the programmable routing structure of FIG. 2 includes the logic for a single bus, e.g., one vertical bus, one horizontal bus, or one diagonal bus in FIG. 1. Thus, each routing structure 101 of FIG. 1 includes multiple copies of the structure of FIG. 2 (e.g., nine copies as shown).

The programmable routing structure of FIG. 2 includes a programmable switching structure 210 and a bus structure 215, coupled together as shown in FIG. 2. The busses of the described embodiments include handshake logic, which is well known in the relevant arts. For example, Jens Sparsø has published a tutorial on the subject of asynchronous circuit design using handshake logic, entitled “Asynchronous Circuit Design—a Tutorial”, published by the Technical University of Denmark in 2006 and previously published in 2001.

Bus structure 215 includes the storage elements for the data lines and control logic for the storage elements. Thus, each data line DATA_OUT(1:N) is latched in a corresponding storage element before leaving the routing structure. In one embodiment, N is eight, i.e., the bus is an 8-bit bus. However, N can clearly have other values less than or greater than eight. In one embodiment, N is one.

Briefly, when handshake logic is used, data is latched at appropriate intervals along the data path (e.g., when leaving each programmable routing structure or logic block, in the embodiment of FIG. 1). Each interconnect line or bus is accompanied by a ready line and an acknowledge line. A given latch on the interconnect line opens to receive a new value only when the handshake logic for the given latch acknowledges receipt of the previously received data, and the handshake logic for the subsequent latch on the interconnect line acknowledges receipt of the data previously sent by the given latch.

To implement this logical function, handshake logic typically includes a logic structure known as a C-element. FIG. 3 shows a common implementation of a C-element. Briefly, a C-element has two inputs and an output. As long as the values of the two inputs are different, the output of the C-element does not change. When both inputs go high, the output goes high. When both inputs go low, the output goes low. This behavior is shown in tabular form in FIG. 4.

The C-element implementation of FIG.
3 includes P-channel transistors 301-302, N-channel transistors 303-304, and inverters 305-306, coupled together as shown in FIG. 3. When inputs IN1 and IN2 are both high, internal node 307 is pulled low through transistors 303-304, the low value is latched by inverters 305-306, and output OUT goes high. When inputs IN1 and IN2 are both low, internal node 307 is pulled high through transistors 301-302, the high value is latched by inverters 305-306, and output OUT goes low. When inputs IN1 and IN2 have two different values, the value in the latch does not change, so output OUT does not change value.Returning now to FIG. 2, handshake circuit 220 includes a C-element 240 (including transistors 221-222, 224-225 and inverters 226-227, coupled together as shown in FIG. 2) having a ready input RDY_IN, an acknowledge input ACK_INB, and an output RDY_OUT/ACK_OUT. (In the present specification, the same reference characters are used to refer to input and/or output terminals, input and/or output ports, signal lines, and their corresponding signals.) Note that the acknowledge and ready outputs are the same for C-element 240. Since the acknowledge output enables the latches and the ready output signals that new data is ready to send, the data latches need to be faster than the ready latch (the latch in the C-element). The behavior of C-element 240 is shown in tabular form in FIG. 5.Handshake circuit 220 also includes an inverter 228. Inverter 228, in conjunction with XOR gate 253 and inverter 254, acts to enable (open) the data latches when handshake logic 220 signals readiness to receive new data (via signal ACK_OUT) and a handshake circuit in a subsequent circuit on the interconnect line signals receipt of the previously sent data (via signal ACK_IN).In the pictured embodiment, each data latch 230(1:N) includes a tristate inverter (P-channel transistors 231-232 and N-channel transistors 234-235, coupled in series between power high VDD and ground GND) driving a latch (inverters 236-237). It will be clear to those of skill in the art that other latch implementations can also be used. The latch is opened (e.g., the tristate inverter is enabled) when signal EN_DATA is high.One advantage of the data latch implementation shown in FIG. 2 is that the structure of the data latch is similar to that of the C-element. Transistors 221, 222, 224, and 225 of the C-element are similar to transistors 231, 232, 234, and 235 of the data latch, and inverters 226-227 of the C-element are similar to inverters 236-237 of the data latch. Thus, the transistors in the two structures may be given the same size, and may be laid out in the same orientations and in the same positions relative to the other transistors in the same structure. As a consequence, a data input to each data latch may be affected by the transistors in the data latch in the same or a similar manner to that in which a ready input to the C-element is affected by the transistors in the C-element.Note that the latches in this figure and the other figures herein can also include reset and/or set circuitry such as is well known in the art. For example, each latch can include a NOR or NAND gate in the loop instead of one of the inverters, with the NOR or NAND gate driven by a reset or set input. In one embodiment of C-element 240, for example, inverter 226 is replaced by a NOR gate having a reset signal as the second input.The handshake logic in bus structure 215 operates in a “2-phase mode”, which is illustrated in FIG. 6. 
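Before turning to the 2-phase protocol in detail, the C-element behavior just described can be captured in a few lines of behavioral code. The following Python sketch is illustrative only (the class name and method are assumptions made for illustration, not part of the disclosure); it models the truth table of FIG. 4, in which the output changes only when both inputs agree:

class CElement:
    # Behavioral model of the C-element of FIGS. 3-4: the output
    # follows the inputs when they agree, and the inverter pair
    # holds the previously latched value when they differ.
    def __init__(self, init=False):
        self.out = init

    def update(self, in1, in2):
        if in1 == in2:        # both high -> high, both low -> low
            self.out = in1
        return self.out       # unequal inputs: hold the latched value

c = CElement()
assert c.update(True, False) is False   # inputs differ: output holds low
assert c.update(True, True) is True     # both inputs high: output goes high
assert c.update(False, True) is True    # inputs differ again: output holds high

C-element 240 of FIG. 2 differs in detail (its ready and acknowledge outputs are merged, and one input is the inverted acknowledge ACK_INB), but the hold-until-both-inputs-agree behavior is the same.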
In a 2-phase handshake mode, both rising and falling edges of the triggering input signal (either the acknowledge signal from the subsequent handshake circuit (ACK_IN) or the ready signal from the instant handshake circuit (RDY_IN)) are used to enable the transfer of new data to the data latches. The ACK_IN and RDY_IN signals can change value in either order, or simultaneously. However, in all of these situations, in 2-phase mode both rising and falling edges of the triggering input signal enable a transfer of new data to the latches.

Because of the handshake functionality in the routing structure, each data line and each bus in the routing structure has only one source and one destination. The source and destination are selected by way of programmable switching structures. Programmable switching structure 210 performs the function of the routing multiplexers in known programmable logic devices (PLDs), for example, programmably selecting one of multiple busses and routing the selected bus onward. Programmable switching structure 210 includes N multiplexers 213(1:N) for routing the data lines, a multiplexer 211 for routing a ready signal for the N-bit bus, and a demultiplexer 212 for routing an acknowledge signal for the N-bit bus. (The term “demultiplexer” is used herein to denote a multiplexer in which the data is routed from a single input signal to one of many output signals, rather than the reverse as in an equivalent multiplexer.)

Multiplexers 211 and 213(1:N) and demultiplexer 212 can be implemented, for example, as shown in FIG. 7 or FIG. 8. The embodiment of FIG. 7 comprises CMOS transmission gates 710(1:M), with each transmission gate being controlled by a separate select input signal for the multiplexer/demultiplexer. Thus, only one of these select inputs can be high at any given time. For example, each select input may be controlled by a corresponding memory cell MC(1:M), where M is the number of data inputs/outputs (i.e., M is greater than one). Similarly, in the embodiment of FIG. 8, only one of the N-channel pass gates 801(1:M) can be turned on at any given time. In these embodiments, each select input may be controlled by a separate memory cell. For example, memory cells MC(1:M) may also be included in the programmable switching structure, as shown in FIG. 2. In some embodiments, when the switching structure is included in a programmable logic device (PLD), these memory cells may be configuration memory cells for the PLD. In some embodiments, decoders may be used to drive the select inputs to reduce the number of memory cells required to store the select data. In some embodiments, multi-stage multiplexers may be used. In some embodiments, M is ten; in other embodiments, M is greater than or less than ten.

Because all of the multiplexers 211, 213(1:N) and demultiplexer 212 have the same number of inputs/outputs (i.e., M), they may all be laid out in the same way. In some embodiments, the transistors in multiplexers 211, 213(1:N) and demultiplexer 212 are all the same size as those in the counterpart structures (e.g., the N-channel transistors are all a first size, and the P-channel transistors are all a second size), and the transistors have the same orientations and placements relative to the other transistors in the same structure. This layout consistency lends itself to a space-efficient implementation, although the demultiplexer will have a relatively poor performance in this embodiment because of the high fanout on the ACK_OUT signal. However, the speed of the overall circuit is generally not determined by the delay on the acknowledge path in the interconnect structure, but by delays in the logic blocks interconnected by the interconnect structure. Therefore, this additional delay on the acknowledge path generally does not impact the overall speed of operation.
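The select behavior of the switching structure can likewise be sketched behaviorally. The following Python fragment is illustrative only (the function names route_data and route_ack are assumptions made for illustration, not part of the disclosure); it assumes the one-hot select convention described above, in which exactly one memory cell MC(i) is high at any given time:

def route_data(mc, data_in):
    # One-hot data multiplexer (FIGS. 7-8): the single high memory
    # cell selects which of the M input busses drives the output.
    assert mc.count(True) == 1, "exactly one select input may be high"
    return data_in[mc.index(True)]

def route_ack(mc, ack_out):
    # Acknowledge demultiplexer 212: the single ACK_OUT signal is
    # routed back only to whichever of the M sources is selected.
    return [ack_out if sel else False for sel in mc]

Driving the select inputs from configuration memory cells in this way is what makes the switching structure programmable; as noted above, decoders may be used instead to reduce the number of memory cells required.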
In all of the embodiments illustrated herein, the interconnect lines are unidirectional. Traditionally, unidirectional interconnect lines have been regarded as less desirable than bidirectional interconnect lines, because of their reduced flexibility. For example, the asynchronous FPGA architecture described by John Teifel and Rajit Manohar in their paper entitled “Highly Pipelined Asynchronous FPGAs,” FPGA '04, Feb. 22-24, 2004, uses bidirectional interconnect lines. However, the implementation of bidirectional interconnect lines requires a larger number of multiplexers in the programmable routing structure, to implement the change of direction for the interconnect lines. When the data multiplexers reach a certain size (e.g., M reaches a certain value in the figures herein), it is preferable to increase the number of C-elements in the structure (e.g., by providing two unidirectional interconnect lines instead of one bidirectional interconnect line) rather than increasing the number of multiplexers, as C-elements consume less area than sufficiently large multiplexers. However, some embodiments of the invention may be adapted for use with bidirectional interconnect lines.

The unidirectionality of the illustrated embodiments may also increase the speed of operation for the circuit, because a reduced number of multiplexers reduces the loading on the interconnect lines. Further, the interconnect lines can be driven directly from the storage element or through a simple buffer, rather than through one or more pass gates, as in Teifel and Manohar's FPGA (see FIG. 11 of the above-referenced paper). FIGS. 2, 9, 11, 12, 16, 17, and 18 of the present document illustrate exemplary embodiments of an asynchronous programmable IC in which the storage elements drive unidirectional interconnect lines without traversing a pass gate.

FIG. 9 illustrates how the exemplary routing structure of FIG. 2 can be modified to operate in a 4-phase handshake mode that can be used, for example, in the IC of FIG. 1. For ease of illustration, the same numerical labels are used in FIG. 9 as in FIG. 2 to refer to the same items. However, in alternative embodiments the items may be different. To change the handshake logic of FIG. 2 from a 2-phase mode to a 4-phase mode, XOR gate 253 and inverter 254 are removed and replaced with inverters 953-954.

As mentioned, the handshake logic in bus structure 915 of FIG. 9 operates in a “4-phase mode”, which is illustrated in FIG. 10. In a 4-phase handshake mode, only one edge of the triggering signal (either the acknowledge signal from the subsequent handshake circuit (ACK_IN) or the ready signal from the instant handshake circuit (RDY_IN)) is used to enable the transfer of new data to the data latches. In the pictured embodiment, the falling edge of the triggering signal is used to enable the transfer of new data into the latches. However, it will be clear to those of skill in the art that the circuitry in the 4-phase embodiments shown herein could be adapted to use the rising edge of the triggering signal for this purpose. The ACK_IN and RDY_IN signals can change value in either order, or simultaneously. However, in all of these situations, in 4-phase mode only the rising or the falling edge of the triggering input signal, and not both, enables a transfer of new data to the latches.
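The difference between the two handshake modes can be summarized in the same behavioral style. In the 2-phase structures the latch enable is derived from XOR gate 253, so the enable changes whenever either the local or the downstream acknowledge state toggles; in the 4-phase structure of FIG. 16, described below, the enable simply follows ACK_OUT. The following Python sketch is illustrative only and abstracts away the polarity details that differ from figure to figure:

def en_data_2phase(ack_out, ack_in):
    # 2-phase enable (XOR gate 253 of FIG. 2): EN_DATA is derived
    # from ACK_OUT and the inverse of ACK_IN, so either edge of the
    # triggering signal changes the enable and opens the latches.
    return ack_out ^ (not ack_in)

def en_data_4phase(ack_out):
    # 4-phase enable (FIG. 16): EN_DATA is simply the ACK_OUT
    # signal, so only one level of the triggering signal opens the
    # latches, and the return-to-zero phase closes them again.
    return ack_out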
FIG. 11 illustrates a second exemplary programmable routing structure operating in a 2-phase handshake mode that can be used, for example, in the IC of FIG. 1. For ease of illustration, the same numerical labels are used in FIG. 11 as in FIG. 2 to refer to the same items. However, in alternative embodiments the items may be different.

The routing structure of FIG. 11 utilizes a novel bus structure in which the data routing multiplexers are absorbed into the data storage elements. Thus, each storage element 1130(1:N) includes a data multiplexer 1131 that selects one of M data inputs, e.g., data bits from other routing structures or logic blocks, and a latch having a data input driven by the data multiplexer. The select inputs of the data multiplexers are driven by the control inputs AND_OUT(1:M) of the storage element. Thus, the data multiplexers implement the enable function for the storage element/latch. In the pictured embodiment, the latch includes an inverter 1133 and a NAND gate 1132 having a reset input RST, and drives the data output DATA_OUT(1:N) through another inverter 1134. However, it will be clear to those of skill in the art that the latch can be implemented using many different known methods.

Importantly, the control inputs of the storage element are driven by logic gates (M logical AND gates 1151 in the pictured embodiment) that combine values Q(1:M) from the memory cells MC(1:M) with a control signal EN_DATA from the handshake logic 1120. In the pictured embodiment, each input to the data multiplexers is controlled by a separate memory cell MC(1:M). Thus, each AND gate output AND_OUT(i) is high only when the corresponding memory cell MC(i) stores a high value and XOR gate 253 is providing a high enable signal EN_DATA.

Multiplexer 1131 may be implemented as a single-stage multiplexer (see FIGS. 7 and 8), or as a multi-stage multiplexer. It will be clear to those of skill in the art that in the multi-stage embodiments, the logical AND gates need be applied only to the final stage of the multiplexer. In other embodiments, the logical AND gates are applied to an earlier stage, e.g., the first stage, instead of to the final stage.

Handshake circuit 1120 includes a C-element (which may be similar to C-element 240 of FIG. 2, as shown, or may be another implementation) and an inverter 1128, coupled together as shown in FIG. 11. The enable signal EN_DATA is provided by XOR gate 253, driven by the ACK_OUT signal and the inverse of the ACK_IN signal, in a similar fashion to the embodiment of FIG. 2. Thus, it is clear that the handshake logic for this routing structure operates in a 2-phase mode, as described above in conjunction with FIGS. 2 and 6.

FIG. 12 illustrates how the performance of the embodiment of FIG. 11 can be improved by using multiple power high voltages. In the embodiment of FIG. 12, the logic in circuit portion 1200 is implemented using a higher power high voltage than the logic outside portion 1200. Thus, the circuits in portion 1200 (which include the routing multiplexers/demultiplexer, those elements most likely to slow the circuit) will operate at a faster speed than they would have at the standard power high voltage. To operate properly and without damaging the transistors, transistors in this portion of the routing structure utilize a thicker oxide than transistors outside of portion 1200.
This technique may also be applied to the other embodiments illustrated herein. Note that the higher power high voltage is only applied to the gates (i.e., the select inputs) of the multiplexers/demultiplexers in portion 1200, and not to the data inputs/outputs.Note that logical AND gates 1151 are operating at the higher power high voltage VGG, and each logical AND gate 1151 has one input at each of the two voltages, i.e., one of signals Q(1:M) at the higher voltage VGG and signal EN_DATA at the lower power high voltage VDD. Traditionally, such a logical AND gate may be implemented as shown in FIG. 13, for example.The logical AND gate of FIG. 13 includes N-channel transistors 1303-1306 and P-channel transistors 1301-1302, coupled together as shown in FIG. 13. Note that the two input signals must be inverted, so the structure requires two additional inverters (not shown), and the circuit structure is actually driven by the four signals EN_DATA, EN_DATAB, Q(i), and QB(i). Routing these additional signals consumes additional metal tracks, and can adversely impact the layout of the circuit. Additionally, the embodiment of FIG. 13 does not drive the output strongly, so an additional inverter on the output AND_OUT(i) is desirable.The circuit of FIG. 13 can be used in the embodiment of FIG. 12, if desired. However, FIG. 14 shows another implementation of a logical AND gate that can be used instead of the known implementation shown in FIG. 13. The implementation of FIG. 14 has the advantage that the Q(i) input signal need not be inverted, and there is no need for an additional inverter on the output. Thus, the circuit of FIG. 14 uses fewer transistors than the circuit of FIG. 13.AND logic circuit 1420 of FIG. 14 includes P-channel transistors 1421-1422, N-channel transistor 1423, and inverter 1424, coupled together as shown in FIG. 14. When used as shown in FIG. 12, the EN_DATAB input of the AND logic circuit operates at the first (lower) power high level VDD, and the Q(i) input from the memory cell operates at the second (higher) power high level VGG. The EN_DATAB signal is the inverse of the EN_DATA signal, and may be easily generated by adding an inverter to the circuit of FIG. 12. The output of AND logic circuit 1420 operates at the second power high level VGG. (A signal is said herein to “operate at” a given voltage level when the value varies between ground GND and the given voltage level.) AND logic circuit 1420 operates as follows.When input Q(i) is low, transistor 1423 is turned off, transistor 1421 pulls internal node INT high, driving output AND_OUT low through inverter 1424. The low value on output AND_OUT turns on transistor 1422, pulling internal node INT to the value of power high VGG. The VGG value on node INT fully turns off the P-channel transistor in inverter 1424, essentially eliminating the crowbar current through the inverter. Thus, when input Q(i) is low, output AND_OUT is also low.When input Q(i) is high (with the value of power high VGG), transistor 1421 is off and transistor 1423 is on. Thus, AND logic circuit 1420 is essentially a half-latch driven by signal EN_DATAB through transistor 1423. A low value on input EN_DATAB is passed through transistor 1423 and inverted by inverter 1424 to provide a high value on output AND_OUT(i). A high value on input EN_DATAB is passed through transistor 1423 and inverted by inverter 1424 to provide a low value on output AND_OUT(i).In many situations, the AND logic circuit of FIG. 
14 can satisfactorily be used to implement an AND function with two different input voltage levels and an output driven at the higher of the two voltage levels. However, for some combinations of values for VDD, VGG, and Vtn (the threshold voltage of transistor 1423) there may be undesirable current flow from VGG to VDD. When input Q(i) is high and input EN_DATAB is high, there may be current flow between the two power high voltages VGG and VDD, through transistors 1422 and 1423. This current flow may be overcome by adding a pulsed driver circuit to the logical AND circuit, as shown in FIG. 15.The circuit structure of FIG. 15 includes a pulsed driver circuit 1510 and one or more AND logic circuits 1420(1:M). Pulsed driver circuit 1510 operates at the lower power high voltage VDD, has an input EN_DATAB operating at VDD, and an output operating at VDD that provides signal P_EN to AND logic circuits 1420(1:M). In response to a falling edge on signal EN_DATAB, pulsed driver circuit 1510 drives a high value onto output P_EN, and then releases the output signal P_EN to be driven high by AND logic circuits 1420(1:M).Pulsed driver circuit 1510 includes P-channel transistors 1511-1512, N-channel transistors 1513 and 1516, and inverters 1514-1515, coupled together as shown in FIG. 15. The circuit structure of FIG. 15 operates as follows.When input Q(i) is low, transistor 1423 is turned off, transistor 1421 pulls internal node INT high, driving output AND_OUT low through inverter 1424. The low value on output AND_OUT turns on transistor 1422, reinforcing the high value on internal node INT. Thus, when input Q(i) is low, output AND_OUT is also low, regardless of the value of input EN_DATAB.When input Q(i) is high (with the value of power high VGG), transistor 1421 is off and transistor 1423 is on. Thus, AND logic circuit 1420 is essentially a half-latch driven by signal P_EN through transistor 1423. A falling edge on input EN_DATAB turns on transistor 1512. Transistor 1511 is already on, because signal P_EN was low and the low value was passed to the gate of transistor 1511 through feedback path 1516-1514. Thus, signal P_EN goes high with a value of power high VDD. The high value is passed through transistor 1423 and inverted by inverter 1424 to provide a low value on output AND_OUT(i). The high value on signal P_EN also passes to the gate of transistor 1511 through the feedback path 1516-1514, and turns off transistor 1512. Therefore, pulsed driver circuit 1510 stops driving signal P_EN. However, signal P_EN remains high, because transistors 1423 and 1422 are on. However, signal P_EN is now at the VGG power high level, rather than at VDD.When input Q(i) is high and a rising edge is received on input EN_DATAB, signal P_EN is pulled low through transistor 1513. The low value passes through transistor 1423 and is inverted by inverter 1424 to provide a high value on output AND_OUT(i).FIG. 16 illustrates how the exemplary routing structure of FIG. 11 can be modified to operate in a 4-phase handshake mode that can be used, for example, in the IC of FIG. 1. For ease of illustration, the same numerical labels are used in FIG. 16 as in FIGS. 2 and 11 to refer to the same items. However, in alternative embodiments the items may be different. To change the handshake logic of FIG. 11 from a 2-phase mode to a 4-phase mode, XOR gate 253 is removed and the EN_DATA signal is the same as the ACK_OUT signal. Otherwise, the logic remains the same.FIG. 
17 illustrates a third exemplary programmable routing structure operating in a 2-phase handshake mode that can be used, for example, in the IC of FIG. 1. The programmable switching structure 210 is the same as that of FIG. 2, although it can differ in some embodiments. The bus structure 1715 is similar to bus structure 215 of FIG. 2, but utilizes different implementations of the C-element and the data storage elements.Handshake circuit 1760 includes a known C-element 1740 that includes P-channel transistors 1761-1765, N-channel transistors 1766-1770, and inverter 1771, coupled together as shown in FIG. 17. The functionality of C-element 1740 is the same as C-element 240 of FIG. 2, but in some circumstances the implementation of FIG. 17 may be preferred. In C-element 1740, the feedback inverter has been replaced by stacked devices, so the feedback inverter turns off when a new value is being written to the latch. Therefore, the sizing of the transistors is less important. Handshake circuit 1760 also includes inverter 1772, which is driven by the acknowledge line ACK_IN.Each data storage element 1780(1:N) includes P-channel transistor 1781 and N-channel transistor 1784 coupled to form a CMOS transmission gate enabled by a high value on the EN_DATA signal from XOR gate 1754. Inverter 1755 provides the complement (active low) enable input signal from the active high enable signal EN_DATA. The CMOS transmission gate drives inverter 1787, which feeds back to control the structure formed from P-channel transistors 1782-1783 and N-channel transistors 1785-1786, coupled in series between power high VDD and ground GND. Thus, transistors 1782-1783, 1785-1786 and inverter 1787 form a latch that provides the storage function for the storage element 1780(1:N). An inverter 1788 buffers the output DATA_OUT(1:N) from the data storage element 1780(1:N).FIG. 18 illustrates how the exemplary routing structure of FIG. 17 can be modified to operate in a 4-phase handshake mode that can be used, for example, in the IC of FIG. 1. For ease of illustration, the same numerical labels are used in FIG. 18 as in FIG. 17 to refer to the same items. However, in alternative embodiments the items may be different. To change the handshake logic of FIG. 17 from a 2-phase mode to a 4-phase mode, XOR gate 1754 is replaced by an inverter 1854 driven by signal ACK_OUT from the C-element, and inverter 1855 replaces inverter 1755, in bus structure 1815. Thus, the enable signal EN_DATAB for the latches is active low, rather than active high as in the embodiment of FIG. 17.FIG. 18 also includes exemplary initialization logic that can be used to place the handshake logic and data lines into known states, e.g., at power-up or during a configuration sequence for a programmable IC. Handshake circuit 1860 includes NAND gate 1872 driven by the acknowledge line ACK_IN and an input signal GHIGHB. Handshake circuit 1860 also includes N-channel transistors 1873, 1874, and 1875 coupled together as shown in FIG. 18 and driven by NAND gate 1872, input signal GHIGHB, and a strobed input signal STR, respectively. Signals GHIGHB and STR are used as part of the initialization process, which is discussed in conjunction with FIGS. 19-21.The ready input RDY_IN to the C-element and a node DATA_IN(1:N) on each data line also have a pullup 1851-1853 to power high (VDD in the pictured embodiment; VGG in other embodiments). In the pictured embodiment, these initialization transistors are gated by an input signal GHIGHB. 
Input signal GHIGHB is also used as part of the initialization process, which is discussed in conjunction with FIGS. 19-21.FIGS. 19 and 20 are flow diagrams illustrating methods of initializing routing structures in ICs, where the routing structures include data lines and handshake circuitry. The methods of FIGS. 19-20 can be applied, for example, to the circuit of FIG. 18. With the addition of appropriate initialization circuitry, the methods of FIGS. 19 and 20 can also be applied to the other exemplary routing structure embodiments illustrated herein. Those of skill in the art will have the ability to develop such circuitry after review and study of the embodiments disclosed in FIGS. 18-21 herein and in view of the following description of the initialization process.The method illustrated in FIG. 19 can be applied to ICs that may or may not be programmable, i.e., the ICs may be non-programmable ICs, partially programmable ICs, fully programmable ICs, PLDs, FPGAs, CPLDs, and so forth.In step 1905, a node on each of the data lines is driven to a predetermined value (e.g., a high value in the embodiment of FIG. 18). In step 1910, the handshake circuitry is disabled by disabling an acknowledge path within the handshake circuitry. In the pictured embodiments, the handshake circuitry is disabled by forcing all acknowledge signals in the acknowledge path to signal an acknowledgement of received data (e.g., all signals ACK_OUT are driven high in FIG. 18). As a result, the predetermined value is propagated throughout the data lines (action 1915).In some embodiments, disabling the acknowledge path causes latches on the data lines to be enabled to pass the predetermined value (e.g., in FIG. 18, the high values on the DATA_IN nodes are passed through the latches to the DATA_OUT outputs).In some embodiments, the acknowledge signals in the acknowledge path are forced to signal an acknowledgement of received data (e.g., ACK_OUT is forced high in FIG. 18) by forcing all ready signals RDY_IN within the handshake circuitry to the predetermined value (a low value on signal GHIGHB pulls signal RDY_IN high through transistor 1851 in FIG. 18) and placing associated C-elements 1740 in a state where each C-element passes the predetermined value from the associated ready signal RDY_IN to an associated acknowledge signal ACK_OUT (the low value on signal GHIGHB forces the output of NAND gate 1872 high, placing the C-element 1740 in a state where it passes a high value but not a low value).Note that steps 1905 and 1910 may occur concurrently. In one embodiment, the driving and disabling occur in response to an initialization signal assuming a first value (e.g., GHIGHB assumes a low value in FIG. 18).In step 1920, the handshake circuitry is enabled by enabling the acknowledge path (e.g., releasing the ACK_OUT signals in FIG. 18). As a result, the data lines are released to assume values determined by operation of the IC (action 1925). The enablement and release may occur at a point in time after the initialization signal assumes a second value, where the second value is opposite to the first value (e.g., the second value is a high value in FIG. 18).FIG. 20 is a flow diagram illustrating a method of initializing a routing structure in a programmable IC. For example, the IC in these embodiments may be a partially programmable IC, fully programmable IC, PLD, FPGA, CPLD, and so forth.In step 2005, a node on each of the data lines is driven to a predetermined value (e.g., a high value in the embodiment of FIG. 18). 
In step 2010, the handshake circuitry is disabled by disabling an acknowledge path within the handshake circuitry. As a result, the predetermined value is propagated throughout the data lines (action 2015). In the pictured embodiments, the handshake circuitry is disabled by forcing all acknowledge signals in the acknowledge path to signal an acknowledgement of received data (e.g., all signals ACK_OUT are driven high in FIG. 18).In some embodiments, disabling the acknowledge path causes latches on the data lines to be enabled to pass the predetermined value (e.g., in FIG. 18, the high values on the DATA_IN nodes are passed through the latches to the DATA_OUT outputs).In some embodiments, the acknowledge signals in the acknowledge path are forced to signal an acknowledgement of received data (e.g., ACK_OUT is forced high in FIG. 18) by forcing all ready signals RDY_IN within the handshake circuitry to the predetermined value and placing associated C-elements 1740 in a state where each C-element passes the predetermined value from an associated ready signal RDY_IN to an associated acknowledge signal ACK_OUT.Note that steps 2005 and 2010 may occur concurrently (e.g., as in the embodiment of FIG. 18). In one embodiment, the driving and disabling occur in response to an initialization signal assuming a first value (e.g., GHIGHB assumes a low value in FIG. 18). In this embodiment, the method illustrated in FIG. 20 occurs in response to a configuration sequence for the programmable IC, and the nodes on the data lines are driven to the predetermined value by (for example) pullups 1852-1853 in FIG. 18. In another embodiment, the nodes on the data lines are driven to the predetermined value by forcing data outputs from the logic blocks to the predetermined value (e.g., a high value), and these values are propagated throughout the data lines by the disabling step 2010. In these embodiments, pullups 1852-1853 may be omitted.In step 2020, configuration values are programmed into the programmable IC. In step 2025, the handshake circuitry is enabled by enabling the acknowledge path (e.g., releasing the ACK_OUT signals in FIG. 18). As a result, the data lines are released to assume initial values determined by the programmed configuration values. Clearly, the data lines may assume other values during operation of the design implemented by the configuration values. The enablement and releasing may occur at a point in time after the initialization signal assumes a second value, where the second value is opposite to the first value (e.g., the second value is a high value in FIG. 18).FIG. 21 is a waveform diagram illustrating in more detail how the methods of FIGS. 19 and 20 can be applied to the circuitry of FIG. 18 when used in a programmable IC. FIG. 21 illustrates the signal values that would occur in the routing structure of FIG. 18 during configuration, start-up, and operation phases of the programmable IC.The circuit of FIG. 18 has two input signals relating to the initialization process: GHIGHB and STR.The GHIGHB (global-high-bar) signal is low during power-up and remains low during the configuration phase of a programmable IC, e.g., while configuration data is programmed into the programmable IC. Signal GHIGHB goes high after completion of the configuration phase, and remains high thereafter.Strobe signal STR is initially low, and exhibits a high pulse after signal GHIGHB goes high. The high pulse may be initiated by a rising edge on signal GHIGHB, or by other means. 
The release of signal STR to a low value signals the end of the configuration sequence, and normal operation of the circuit implemented in the programmable IC begins.

During the configuration phase, nodes DATA_IN(1:N) are forced high by the low value on signal GHIGHB turning on pullups 1852-1853. (See step 2005 in FIG. 20.) Similarly, all of the ready signals RDY_IN are forced high as the GHIGHB signal turns on pullups 1851. The low value on signal GHIGHB also forces the output of NAND gate 1872 high, which allows the high value on node RDY_IN to be passed through C-element 1740, driving signal ACK_OUT high. Thus, the acknowledge path is disabled, with all of the acknowledge signals in the acknowledge path signaling an acknowledgement of received data (see step 2010).

Because signal ACK_OUT is high, EN_DATAB goes low, enabling (opening) all of the latches 1780(1:N). The high values on nodes DATA_IN(1:N) are propagated to the DATA_OUT(1:N) outputs and throughout all of the data lines on the IC (action 2015).

For the duration of the configuration phase (step 2020), as the configuration data is programmed into the programmable IC, the C-element 1740 will pass only high values, because of the low value on signal GHIGHB. Therefore, the ACK_OUT signals remain high, and the EN_DATAB signals remain low. The data latches continue to pass data freely.

During the start-up phase, after configuration is complete and signal GHIGHB goes high, strobe signal STR pulses high (e.g., triggered by the rising edge of signal GHIGHB). Strobe signal STR is included to accommodate the programmable nature of the IC. A design implemented in a programmable IC typically does not use all of the programmable resources of the IC. Once the design begins to operate, the used interconnect will assume values determined by the operation of the IC. However, the unused interconnect will not be driven once the design begins to operate, except by the data latches. Therefore, the high pulse on strobe signal STR performs the function of closing all the data latches, latching the predetermined value (e.g., the high value) into the data latches, and ensuring that all unused data lines continue to be driven to the predetermined value during operation of the design.

When the STR signal goes low again, the acknowledge path is enabled (step 2025; the ACK_IN signals are no longer pulled low), and the data lines are released to assume initial values determined by the programmed configuration values (action 2030). These values are then free to vary as determined by the normal operation of the design.
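The configuration, start-up, and operation phases shown in FIG. 21 can be summarized in one more behavioral sketch. The following Python fragment is illustrative only; the dictionary-based signal model and the helper name initialize_bus are assumptions made for illustration rather than structures from the disclosure:

def initialize_bus(bus):
    # Configuration phase: GHIGHB is low.
    bus["GHIGHB"] = False
    bus["RDY_IN"] = True              # pullup 1851 forces the ready line high
    bus["DATA_IN"] = True             # pullups 1852-1853 force the data nodes high
    bus["ACK_OUT"] = True             # C-element 1740 passes only the high value
    bus["EN_DATAB"] = False           # latches open
    bus["DATA_OUT"] = bus["DATA_IN"]  # predetermined value propagates

    # Start-up phase: GHIGHB goes high, then STR pulses high.
    bus["GHIGHB"] = True
    bus["STR"] = True
    bus["EN_DATAB"] = True            # latches close, retaining the high value

    # Operation phase: STR is released and the acknowledge path is enabled.
    bus["STR"] = False
    # Used data lines now assume values determined by the design; unused
    # data lines remain driven to the predetermined value by the latches.
    return bus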
Those having skill in the relevant arts of the invention will now perceive various modifications and additions that can be made as a result of the disclosure herein. For example, pullups, pulldowns, transistors, P-channel transistors, N-channel transistors, N-channel pass gates, CMOS transmission gates, multiplexers, demultiplexers, logical AND gates, XOR gates, inverters, tristate inverters, C-elements, storage elements, latches, initialization circuitry, handshake circuits, routing structures, programmable switching structures, bus structures, memory cells, and other components other than those described herein can be used to implement the invention. Active-high signals can be replaced with active-low signals by making straightforward alterations to the circuitry, such as are well known in the art of circuit design. Logical circuits can be replaced by their logical equivalents by appropriately inverting input and output signals, as is also well known.

Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance the method of interconnection establishes some desired electrical communication between two or more circuit nodes. Such communication can often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art.

Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents. Note that claims listing steps do not imply any order of the steps. Trademarks are the property of their respective owners. |
The application relates to a memory component with internal logic to perform a machine learning operation. The memory component includes a first region of memory cells to store a machine learning model and a second region of the memory cells to store input data and output data of the machine learning operation. The memory component can further include in-memory logic, coupled to the first region of the memory cells and the second region of the memory cells via one or more internal buses, to perform the machine learning operation by applying the machine learning model to the input data to generate the output data. |
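The phrase "applying the machine learning model to the input data" corresponds, for a resistor-array implementation such as that recited in claim 2 below, to an analog multiply-accumulate: the model weights are programmed as conductances, and each output line sums the products of input voltage and conductance. The following Python sketch is a schematic model only; the function names and the linear weight-to-conductance mapping are assumptions made for illustration, not taken from the application:

import numpy as np

def program_conductances(weights, g_min=1e-6, g_max=1e-4):
    # Map model weights linearly onto the programmable conductance
    # range of the resistor array, one resistor per weight
    # (assumes the weights are not all equal).
    w = np.asarray(weights, dtype=float)
    return g_min + (w - w.min()) * (g_max - g_min) / (w.max() - w.min())

def crossbar_layer(conductances, input_voltages):
    # One in-memory matrix-vector multiply: by Ohm's law and
    # Kirchhoff's current law, each output current is the sum over
    # the input lines of voltage times conductance.
    return np.asarray(input_voltages) @ conductances

In this picture, the conductance matrix plays the role of the machine learning model held in the first region of memory cells, while the input and output vectors correspond to the input data and output data held in the second region.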
1. A memory component comprising:
a first region of a plurality of memory cells to store a machine learning model;
a second region of the plurality of memory cells to store input data and output data of a machine learning operation; and
in-memory logic, coupled to the first region of the memory cells and the second region of the memory cells via one or more internal buses, to perform the machine learning operation by applying the machine learning model to the input data to generate the output data.

2. The memory component of claim 1, wherein the in-memory logic corresponds to a resistor array, and wherein the in-memory logic is further to:
program resistance values of resistors of the resistor array based on the machine learning model.

3. The memory component of claim 1, further comprising another region of the plurality of memory cells corresponding to the in-memory logic, wherein the in-memory logic is further to:
program the memory cells of the other region of the memory cells based on the machine learning model.

4. The memory component of claim 3, wherein the programming of the memory cells is further based on a plurality of nodes of the machine learning model and weights between pairs of nodes of the machine learning model.

5. The memory component of claim 1, further comprising another region of the plurality of memory cells to store host data that is separate from the machine learning operation.

6. The memory component of claim 1, wherein the one or more internal buses are internal to the memory component.

7. The memory component of claim 1, wherein the machine learning model is a neural network machine learning model.

8. A method comprising:
receiving a request to perform a machine learning operation at a memory component;
in response to receiving the request, allocating a portion of a plurality of memory cells of the memory component to perform the machine learning operation;
determining, by a processing device, a remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation;
receiving host data to be stored at the memory component; and
storing the host data at the remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation.

9. The method of claim 8, wherein allocating the portion of the plurality of memory cells of the memory component to perform the machine learning operation comprises:
programming memory cells of the portion of the plurality of memory cells based on a machine learning model associated with the machine learning operation.

10. The method of claim 9, wherein the machine learning model is associated with a plurality of nodes and weights of edges between pairs of nodes of the plurality of nodes, and wherein the programming of the memory cells is based on the plurality of nodes and the weights of the edges between the pairs of nodes.

11. The method of claim 8, wherein the machine learning operation is associated with a neural network machine learning model.

12. The method of claim 8, further comprising:
providing, to a host system, an indication of a capacity of the remaining portion of the plurality of memory cells that is not allocated for the performance of the machine learning operation to store data from the host system.

13. The method of claim 8, further comprising:
receiving an instruction to change the machine learning model used by the machine learning operation;
in response to receiving the instruction to change the machine learning model, allocating another portion of the plurality of memory cells of the memory component to perform the machine learning operation using the changed machine learning model; and
determining, by the processing device, another remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation based on the allocating of the other portion of the plurality of memory cells.

14. The method of claim 8, wherein the performance of the machine learning operation corresponds to applying a machine learning model to input data stored at the memory component.

15. A system comprising:
a memory component; and
a processing device, operatively coupled with the memory component, to:
receive a request to perform a machine learning operation at the memory component;
in response to receiving the request, allocate a portion of a plurality of memory cells of the memory component to perform the machine learning operation;
determine a remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation;
receive host data to be stored at the memory component; and
store the host data at the remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation.

16. The system of claim 15, wherein, to allocate the portion of the plurality of memory cells of the memory component to perform the machine learning operation, the processing device is further to:
program memory cells of the portion of the plurality of memory cells based on a machine learning model associated with the machine learning operation.

17. The system of claim 16, wherein the machine learning model is associated with a plurality of nodes and weights of edges between pairs of nodes of the plurality of nodes, and wherein the programming of the memory cells is based on the plurality of nodes and the weights of the edges between the pairs of nodes.

18. The system of claim 15, wherein the machine learning operation is associated with a neural network machine learning model.

19. The system of claim 15, wherein the processing device is further to:
provide, to a host system, an indication of a capacity of the remaining portion of the plurality of memory cells that is not allocated for the performance of the machine learning operation to store data from the host system.

20. The system of claim 15, wherein the performance of the machine learning operation corresponds to applying a machine learning model to input data stored at the memory component. |
Memory component with internal logic to perform machine learning operations

Technical Field

The present disclosure generally relates to a memory component, and more specifically relates to a memory component having internal logic to perform machine learning operations.

Background

A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. The memory subsystem can include one or more memory components that store data. The memory components can be, for example, non-volatile memory components and volatile memory components. In general, a host system can utilize the memory subsystem to store data at the memory components and to retrieve data from the memory components.

Summary

An embodiment of the present application provides a memory component that includes: a first region of a plurality of memory cells to store a machine learning model; a second region of the plurality of memory cells to store input data and output data of a machine learning operation; and in-memory logic, coupled to the first region of the memory cells and the second region of the memory cells via one or more internal buses, to perform the machine learning operation by applying the machine learning model to the input data to generate the output data.

Another embodiment of the present application provides a method that includes: receiving a request to perform a machine learning operation at a memory component; in response to receiving the request, allocating a portion of a plurality of memory cells of the memory component to perform the machine learning operation; determining, by a processing device, a remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation; receiving host data to be stored at the memory component; and storing the host data at the remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation.

Another embodiment of the present application provides a system that includes a memory component and a processing device, operatively coupled with the memory component, to: receive a request to perform a machine learning operation at the memory component; in response to receiving the request, allocate a portion of a plurality of memory cells of the memory component to perform the machine learning operation; determine a remaining portion of the plurality of memory cells of the memory component that is not allocated for the performance of the machine learning operation; receive host data to be stored at the memory component; and store the host data at the remaining portion of the plurality of memory cells that is not allocated for the performance of the machine learning operation.

Brief Description of the Drawings

The present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure.

Figure 1 illustrates an example computing environment including a memory subsystem, in accordance with some embodiments of the present disclosure.

Figure 2 illustrates an example computing environment including one or more memory components that include a machine learning operation component, in accordance with some embodiments of the present disclosure.

Figure 3 illustrates an example memory component with an internal machine learning operation component, in accordance with some embodiments of the present disclosure.

Figure 4 is a flowchart of an example method for performing a machine learning operation and for storing host data at a memory component, in accordance with some embodiments.

Figure 5 illustrates an example memory component with an internal machine learning operation component based on the memory cells of the memory component, in accordance with some embodiments of the present disclosure.

Figure 6 is a flowchart of an example method for allocating a portion of a memory component to a machine learning operation and for storing host data, in accordance with some embodiments.

Figure 7 is a flowchart of an example method for providing an indication of a capacity of a memory component to a host system based on a machine learning model, in accordance with some embodiments.

Figure 8 illustrates a machine learning operation component implemented in the memory subsystem controller of a memory subsystem, in accordance with some embodiments of the present disclosure.

Figure 9 illustrates a machine learning operation component implemented in one or more memory components of a memory subsystem, in accordance with some embodiments of the present disclosure.

Figure 10 is a flowchart of an example method for performing a portion of a machine learning operation at one or more memory components of a memory subsystem, in accordance with some embodiments.

Figure 11 illustrates an example memory component and memory subsystem with a single bus for transmitting data for a memory space and a machine learning space, in accordance with some embodiments of the present disclosure.

Figure 12 is a flowchart of an example method for transmitting a requested operation to a memory space or a machine learning space based on the type of the operation, in accordance with some embodiments.

Figure 13 is a flowchart of an example method for providing a requested operation to a memory space or a machine learning space based on a memory address, in accordance with some embodiments.

Figure 14 illustrates example memory components and memory subsystems with separate buses for transmitting data for a memory space and a machine learning space, in accordance with some embodiments of the present disclosure.

Figure 15 is a flowchart of an example method for performing operations in an order based on a priority of machine learning operations, in accordance with some embodiments of the present disclosure.

Figure 16A illustrates a series of operations that have been received for the memory space and the machine learning space of a memory component or memory subsystem, in accordance with some embodiments of the present disclosure.

Figure 16B illustrates a series of operations that have been ordered based on a prioritization of machine learning operations, in accordance with some embodiments of the present disclosure.

Figure 17 is a flowchart of an example method for changing the performance of a machine learning operation based on a performance metric associated with a memory space, in accordance with some embodiments.

Figure 18 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

Detailed Description

An aspect of the present disclosure is directed to a memory component with internal logic to perform machine learning operations. A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. Generally, a host system can utilize a memory subsystem that includes one or more memory components.
The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem.

A conventional memory subsystem can utilize conventional memory components to store and retrieve data for the host system. The host system can perform machine learning operations that utilize a machine learning model to process data. For example, the machine learning model can be used to classify data or to make other inferences or decisions based on the processing of the data with the machine learning model. A machine learning model refers to a model artifact that is created by a training process and can correspond to a single-level linear or non-linear operation (e.g., a support vector machine (SVM)) or to a multi-level non-linear operation such as a neural network (e.g., a deep neural network, a spiking neural network, a recurrent neural network, etc.). As an example, a deep neural network model can have one or more hidden layers and can be trained by adjusting the weights of the neural network in accordance with a backpropagation learning algorithm or the like.

Conventionally, the memory subsystem can store the data to be processed (for example, the input data to be applied to the machine learning model) as well as the machine learning model. The host system can further utilize a machine learning processor (for example, a neural network processor or neural network accelerator) that performs the machine learning operation based on the data and the machine learning model stored at the memory components of the memory subsystem. For example, the data and the machine learning model can be retrieved from the memory component and provided to the machine learning processor. For certain machine learning operations, there can be repeated transmissions of the intermediate data of the machine learning operation (for example, intermediate data generated by different layers of the machine learning model) between the machine learning processor and the memory components of the memory subsystem. For example, during the performance of a machine learning operation, data can be transmitted over an external bus or interface between the memory component and the machine learning processor. The transmission of the data, the machine learning model, any intermediate data, and the output data between the memory component (and/or memory subsystem) and the separate machine learning processor can incur additional time or latency, because the various data are transmitted over a separate external bus or interface that couples the memory component or memory subsystem with the separate machine learning processor.

Aspects of the present disclosure address the above and other deficiencies with a memory component that has internal logic to perform machine learning operations. For example, the functionality of the machine learning processor can be implemented within the internal logic of the memory component, so that a separate external bus or interface is not used to transmit the data, the machine learning model, and any intermediate data between the memory component and/or the memory subsystem and an external machine learning processor. For example, the machine learning processor can be implemented with internal logic based on the memory cells of the memory component, or the machine learning processor can be implemented with internal logic based on digital logic or a resistor array implemented inside the memory component. In some embodiments, the machine learning processor can be implemented within a memory subsystem.
For example, the machine learning processor can be implemented within a memory component included in the memory subsystem, and/or the machine learning processor can be implemented within the controller of the memory subsystem (also referred to as the memory subsystem controller), or as a separate component of the memory subsystem (for example, a separate machine learning processor circuit).

Thus, the memory component (or memory subsystem) can be used to perform machine learning operations without the use of an external machine learning processor. In addition, the same memory component or memory subsystem can also be used to store and retrieve data for the host system. Therefore, the host system can use the same memory component or memory subsystem to store and retrieve host data while also performing machine learning operations for the host system.

Advantages of the present disclosure include, but are not limited to, improved performance of machine learning operations. For example, because no or less information (for example, input data, intermediate data of the machine learning operation, or the machine learning model) is transmitted from the memory component or memory subsystem to an external machine learning processor over an external bus or interface, the latency of providing such information for use in the machine learning operation is reduced. Therefore, the internal machine learning processor can receive the input data and the machine learning model in less time, while also saving and retrieving the intermediate data and the output results of the machine learning operation in less time. As a result, the performance of the memory component or memory subsystem in performing machine learning operations can be improved, because less time is spent performing a single machine learning operation, thereby enabling the memory component or memory subsystem to perform additional machine learning operations.

Figure 1 illustrates an example computing environment 100 that includes a memory subsystem 110, in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such devices.

The memory subsystem 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of a storage device include a solid-state drive (SSD), a flash drive, a universal serial bus (USB) flash drive, an embedded multimedia controller (eMMC) drive, a universal flash storage (UFS) drive, and a hard disk drive (HDD). Examples of memory modules include a dual in-line memory module (DIMM), a small outline DIMM (SO-DIMM), and a non-volatile dual in-line memory module (NVDIMM).

The computing environment 100 can include a host system 120 that is coupled to one or more memory subsystems 110. In some embodiments, the host system 120 is coupled to different types of memory subsystems 110. FIG. 1 shows one example of a host system 120 coupled to one memory subsystem 110. The host system 120 uses the memory subsystem 110, for example, to write data to the memory subsystem 110 and to read data from the memory subsystem 110.
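To make the latency argument above concrete, the following is a back-of-the-envelope sketch in Python of external-bus crossings per machine learning operation under the two arrangements. The counting scheme (one crossing per data item, with intermediate data spilled to memory at every layer boundary) is an illustrative assumption for exposition, not a cost model disclosed by the application.

```python
# Illustrative count of external-bus transfers for one machine learning
# operation. The formula is a simplification made for this example.

def external_transfers(num_layers):
    """Conventional flow: the input data, the machine learning model, the
    per-layer intermediate data (written out and read back at each layer
    boundary), and the final output all cross the external bus."""
    input_xfer, model_xfer, output_xfer = 1, 1, 1
    intermediate = 2 * (num_layers - 1)  # each boundary: write out, read back
    return input_xfer + model_xfer + intermediate + output_xfer

def in_memory_transfers():
    """In-memory logic: only the request and the final output cross the bus;
    the model, input data, and intermediates stay inside the component."""
    return 2

print(external_transfers(num_layers=8), in_memory_transfers())  # 17 vs 2
```

Under these assumptions the external-processor arrangement grows linearly with model depth, while the in-memory arrangement stays constant, which is the intuition behind the reduced latency described above.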
As used herein, "coupled to" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, and other connections.

The host system 120 can be a computing device such as a desktop computer, a laptop computer, a network server, a mobile device, or such a computing device that includes a memory and a processing device. The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, a universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), etc. The physical host interface can be used to transmit data between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 by a PCIe interface, the host system 120 can further utilize an NVM Express (NVMe) interface to access the memory components (for example, memory device 130). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120.

The memory devices can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

An example of non-volatile memory devices (e.g., memory device 130) is NAND-type flash memory. Each of the memory devices 130 can include one or more arrays of memory cells such as single-level cells (SLC) or multi-level cells (MLC) (e.g., triple-level cells (TLC) or quad-level cells (QLC)). In some embodiments, a particular memory component can include an SLC portion of memory cells as well as an MLC portion, a TLC portion, or a QLC portion. Each of the memory cells can store one or more bits of data used by the host system 120. Furthermore, the memory cells of the memory device 130 can be grouped as memory pages or memory blocks, which can refer to a unit of the memory component used to store data.

Although non-volatile memory components such as NAND-type flash memory are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase change memory (PCM), magnetoresistive random access memory (MRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), and a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, in which a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased.

The memory subsystem controller 115 can communicate with the memory device 130 to perform operations such as reading data, writing data, or erasing data at the memory device 130, and other such operations.
The memory subsystem controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The memory subsystem controller 115 can be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

The memory subsystem controller 115 can include a processor (processing device) 117 configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the memory subsystem controller 115 includes an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120.

In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, etc. The local memory 119 can also include read-only memory (ROM) for storing microcode. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the memory subsystem controller 115, in another embodiment of the present disclosure, a memory subsystem 110 may not include a memory subsystem controller 115 and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, the memory subsystem controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory device 130. The memory subsystem controller 115 can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory device 130. The memory subsystem controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system into command instructions to access the memory device 130, as well as convert responses associated with the memory device 130 into information for the host system 120.

The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the memory subsystem controller 115 and decode the address to access the memory device 130.

In some embodiments, the memory device 130 includes a local media controller 135 that operates in conjunction with the memory subsystem controller 115 to execute operations on one or more memory cells of the memory device 130.
In the same or alternative embodiments, the local media controller 135 can include a machine learning operation component 113 to perform machine learning operations, and/or the machine learning operation component 113 can be implemented based on the internal logic of the memory devices 130 and/or 140.

The memory subsystem 110 includes a machine learning operation component 113 that can perform machine learning operations. In some embodiments, the memory subsystem controller 115 includes at least a portion of the machine learning operation component 113. For example, the memory subsystem controller 115 can include a processor 117 (processing device) configured to execute instructions stored in the local memory 119 for performing the operations described herein. In some embodiments, the machine learning operation component 113 is part of the host system 120, an application, or an operating system.

The machine learning operation component 113 can be used to perform machine learning operations. For example, the machine learning operation component 113 can receive data from a memory component and can also receive a machine learning model from the memory component. The machine learning operation component 113 can perform a machine learning operation based on the received data and the received machine learning model to generate an output result. Additional details regarding the operation of the machine learning operation component 113 are described below.

FIG. 2 illustrates an example computing environment including one or more memory components that include a machine learning operation component 225, in accordance with some embodiments of the present disclosure. In general, the memory component 220 can correspond to the memory device 130 or the memory device 140 of FIG. 1. For example, the memory component 220 can be a volatile memory component or a non-volatile memory component.

As shown in FIG. 2, the memory component 220 can include a machine learning operation component 225 that can perform machine learning operations. In some embodiments, a machine learning operation can include, but is not limited to, the processing of data by using a machine learning model 231 to classify the data, make predictions or decisions, or produce any other type of output result. The machine learning model 231 can be based on, but is not limited to, a neural network such as a spiking neural network, a deep neural network, or another type of machine learning model. As an example, a machine learning operation can correspond to the use of a machine learning model to process input image data in order to classify or identify an object or subject of the input image data. In some embodiments, the machine learning model can be a neural network that is represented by a group of nodes (i.e., neurons) that are connected with other nodes. The connection between a pair of nodes can be referred to as an edge. For example, node 232 and another node 233 can be connected with a third node 234 by edges 235 and 236. Each edge in the neural network can be assigned a weight that is identified as a numerical value. Input data (e.g., the data to be processed) can be provided to a node and can then be processed based on the weight of the connected edge. For example, the value of the weight of an edge can be multiplied with the input data of the node, and the node at the end of the edge can accumulate multiple such values. As an example, the node 232 can receive input data, and the node 233 can receive other input data (e.g., pixel bit values associated with an image). The particular weight value assigned to the edge 236 can be combined (e.g., multiplied or another such operation) with the input data provided to the node 232 to generate an output value, and the other weight value assigned to the edge 235 can be combined (e.g., multiplied) with the other input data provided to the node 233 to generate another output value. The output values can then be combined (e.g., accumulated) at the node 234 to generate a combined output value. The combined output value from the node 234 can be combined or multiplied with another weight assigned to a next edge and accumulated at a next node. For example, the machine learning model 231 can include nodes that are grouped into multiple layers. Signals (e.g., input data and intermediate data) can propagate through the layers until a final layer (i.e., the final output layer), where the output of the machine learning operation is generated. As previously described, the input data and other such intermediate data from the nodes are multiplied by the weights of the edges and then accumulated at the destination nodes at the ends of the edges. As such, the machine learning operation can include multiple layers or series of multiplication and accumulation (MAC) sub-operations.

As shown, the machine learning model 231 can be implemented in the internal logic of the memory component 220. For example, the machine learning model 231 can be implemented as digital logic or a resistor array of the memory component 220, as described in conjunction with FIG. 3. For example, the nodes, edges, and weights can be implemented in the digital logic or the resistor array of the memory component. In some embodiments, the machine learning model 231 can be implemented in the memory cells of the memory component 220, as described in conjunction with FIG. 5. For example, the nodes, edges, and weights can be implemented by using or configuring the memory cells of the memory component. In some embodiments, the memory subsystem controller of a memory subsystem can implement the machine learning model 231, as described in conjunction with FIG. 8. In the same or alternative embodiments, the machine learning model 231 can be implemented in one or more memory components 220 of a memory subsystem, as described in conjunction with FIG. 9.

FIG. 3 illustrates an example memory component 300 with an internal machine learning operation component, in accordance with some embodiments of the present disclosure. In general, the memory component 300 can correspond to the memory device 130 or 140 of FIG. 1 or the memory component 220 of FIG. 2. The memory component 300 can be a volatile memory component or a non-volatile memory component.

As shown in FIG. 3, the memory component 300 can include memory cells 315 to store data. For example, the memory component 300 can receive data from a host system 310 and can store the data at the memory cells 315 of the memory component 300. The host system can further specify machine learning operations to be performed by the memory component 300. For example, the machine learning operations can be performed by a machine learning operation component 301 that is included within the package of the memory component 300 or is internal to the memory component 300. In some embodiments, the machine learning operation component 301 can correspond to digital logic that implements the machine learning operations.
For example, the digital logic can be used to implement the machine learning model, to receive input data for the machine learning model, and to generate output data for the machine learning model. As previously described, the machine learning model can be the structure of a neural network along with the associated weight values of the nodes and of the edges between the nodes of the neural network. The machine learning operation, as described in conjunction with FIG. 2, can then be performed by the digital logic of the machine learning operation component 301 based on the weight, edge, and node configuration of the machine learning model. The digital logic can be implemented by using digital logic gates or other such circuitry. In some embodiments, the multiplication and accumulation (MAC) sub-operations of the machine learning operation can be performed by the digital logic of the machine learning operation component 301.

In some embodiments, the machine learning operation component 301 can correspond to a resistor array. For example, the multiplication and accumulation sub-operations of the machine learning operation can be performed by the resistor array of the machine learning operation component 301. The resistor array can represent the machine learning model: each resistor can represent a node, and the resistance value of a resistor can be programmed or tuned to correspond to the weight value of the edge between a pair of resistors that represents a pair of nodes of the neural network. For example, a resistor can represent a node, and the resistance of the resistor can be programmed to represent the weight value for the edge that is connected at the output of the resistor. The output of the resistor can be an analog value that is based on the programmed resistance of the resistor and the analog input applied to the resistor (e.g., a multiplication sub-operation). The analog value outputs of a pair of resistors can then be combined to generate a combined analog value (e.g., an accumulation sub-operation). In some embodiments, the outputs of the resistors of the last layer of the machine learning model can be coupled with an analog-to-digital converter (ADC) to convert one or more analog signals that are the final values of the machine learning model into a digital signal that represents the output of the machine learning model.

In operation, the memory component 300 can store input data 303 that is to be used by the machine learning operation. For example, the input data 303 can be an image, audio, text, or any other data. The input data 303 can be stored at a particular region of the memory cells of the memory component 300 that has been allocated to store input data for machine learning operations. The allocated region of memory cells can store any number of different input data that can each be used with the machine learning operation. In some embodiments, the input data 303 can be provided by the host system 310. For example, the host system 310 can transmit the input data 303 to the memory component or to a memory subsystem that includes the memory component. In the same or alternative embodiments, the host system 310 can provide an indication that a machine learning operation is to be performed with the input data 303. For example, the host system 310 can identify particular input data that is to be used with the machine learning operation. The machine learning model 302 can store information specifying the structure (for example, the edges, nodes, and weight values) of one or more machine learning models.
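As a brief aside, the following is a minimal Python sketch of the MAC sub-operations just described, using a tiny topology mirroring nodes 232, 233, and 234 of FIG. 2. The function names, weight values, and the simple ADC quantization are illustrative assumptions, not the disclosed digital logic or resistor-array circuit.

```python
# Minimal sketch of the MAC sub-operations described above. The tiny
# two-input topology (nodes 232/233 feeding node 234 via edges 236/235)
# and all names here are illustrative assumptions.

def mac_node(inputs, weights):
    """One destination node: multiply each input by its edge weight, then accumulate."""
    return sum(x * w for x, w in zip(inputs, weights))

def forward(layers, inputs):
    """Propagate data through layers of MAC sub-operations.

    `layers` is a list of weight matrices; row i of a matrix holds the edge
    weights feeding destination node i of that layer.
    """
    signal = inputs
    for weight_matrix in layers:
        signal = [mac_node(signal, row) for row in weight_matrix]
    return signal  # output of the final layer

def crossbar_mac(input_voltages, conductances, adc_levels=256, full_scale=1.0):
    """Analog analogue: a resistor (conductance) array performs the same MAC
    in one step, with an ADC quantizing the accumulated analog value.
    Conductances are non-negative in this simplified model."""
    # Current summed on each output line: I_i = sum_j(V_j * G_ij).
    analog = [mac_node(input_voltages, row) for row in conductances]
    step = full_scale / adc_levels
    return [min(adc_levels - 1, int(v / step)) for v in analog]

# Example: nodes 232 and 233 feed node 234 via edges weighted 0.5 and -0.25.
print(forward([[[0.5, -0.25]]], [0.8, 0.4]))    # [0.3]
print(crossbar_mac([0.8, 0.4], [[0.5, 0.25]]))  # quantized accumulation
```

In hardware, `mac_node` would correspond either to digital multiplier and adder logic or to current summation across programmed resistances, with the ADC conversion applied only at the final layer, as the description above indicates.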
For example, another region of the memory cells of the memory component 300 can be allocated to store the machine learning model 302. In some embodiments, this other region can store multiple different machine learning models.

In operation, the host system 310 can specify particular input data and a particular machine learning model to be used with the machine learning operation. The machine learning operation component 301 can receive the machine learning model 302 corresponding to the specified machine learning model and can be configured or programmed to implement the machine learning operation based on the machine learning model 302. For example, the multiplication and accumulation sub-operations can be performed in accordance with the machine learning model 302. In some embodiments, the digital logic or the resistance values of the resistor array can be configured to perform the multiplication and accumulation sub-operations based on the machine learning model 302. The machine learning operation component 301 can then perform the machine learning operation by retrieving the specified input data 303 and processing the retrieved input data 303 based on the machine learning model 302 to generate the output data 304. For example, the output data can be stored at another region of the memory cells of the memory component 300 that is allocated to store the results of the machine learning operation component 301. In some embodiments, the output data 304 can be transmitted back to the host system 310. In the same or alternative embodiments, the machine learning operation component 301 can provide an indication or notification to the host system 310 that the requested machine learning operation has completed and that the resulting output data 304 has been stored at a particular location of the memory subsystem. The host system 310 can later request the resulting output data 304 by specifying the particular location where the output data 304 has been stored.

In some embodiments, the machine learning model 302, the input data 303, and the output data 304 can be stored at a portion of the memory component that is closer or more proximate to the machine learning operation component 301 than the other memory cells of the memory component.

Therefore, the memory component can be used to store data for the host system, and the same memory component can include internal logic to perform machine learning operations for the host system. The internal logic can be coupled with the memory cells of the memory component via one or more internal buses that are not external to the memory component.

Figure 4 is a flowchart of an example method 400 for performing a machine learning operation and for storing host data at a memory component, in accordance with some embodiments. The method 400 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 400 is performed by the machine learning operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments.
Therefore, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 4, at operation 410, the processing logic receives a request to perform a machine learning operation at a memory component. The machine learning operation can be specified by the host system. For example, the host system can provide input data that is to be processed and analyzed by the machine learning operation to generate output data, or the host system can specify input data that is currently stored at the memory component. In some embodiments, the machine learning operation can be performed by digital logic or a resistor array that is included within the memory component. For example, the machine learning operation can be performed by the internal logic of the memory component. In the same or alternative embodiments, the machine learning operation can be performed by the memory cells of the memory component, as described in conjunction with FIG. 5. The machine learning operation can be the neural network processing of the input data, as previously described. Furthermore, the host system can specify a particular machine learning model that is to be used with the machine learning operation.

At operation 420, the processing logic performs the machine learning operation at the memory component. For example, the machine learning operation can be performed by the internal logic of the memory component, as previously described. In some embodiments, the machine learning operation component can be configured to perform the machine learning operation based on a machine learning model. At operation 430, the processing logic receives host data from the host system. For example, the host system can provide data to be stored at the memory component or at a memory subsystem that includes the memory component. The host data can be data that is not intended to be used with the machine learning operation. For example, the host data can be other data that is to be written to the memory component and that is to be returned from the memory component in response to subsequent read operations from the host system. At operation 440, the processing logic stores the host data from the host system at the same memory component at which the machine learning operation has been performed. For example, the host data can be stored across the memory cells of the memory component that also includes the internal logic that performs the machine learning operation. Furthermore, the internal logic can be separate from the memory cells of the memory component. Thus, the same memory component can be used to store the host data and to perform machine learning operations for the host system.

FIG. 5 illustrates an example memory component 500 with an internal machine learning operation component that is based on the memory cells of the memory component, in accordance with some embodiments of the present disclosure. In general, the memory component 500 can correspond to the memory device 130 or 140 of FIG. 1 or the memory component 220 of FIG. 2. The memory component 500 can be a volatile memory or a non-volatile memory. Furthermore, the machine learning operation component 501 can correspond to the machine learning operation component 113 of FIG. 1.

As shown in FIG. 5, the machine learning operation component 501 can be based on the memory cells of the memory component 500. For example, the memory cells of the memory component 500 can be used to implement a machine learning model 502 for the machine learning operation. In some embodiments, the conductivity of different memory cells can be used to implement the machine learning model.
For example, each memory cell can correspond to a node of the neural network, and the conductivity of the memory cell can be programmed to correspond to the weight value that is to be applied to an input of the memory cell. For example, a memory cell can be programmed with a particular conductivity so that, when an input is applied to the memory cell, the change (e.g., multiplication) applied by the memory cell to the input signal is based on the conductivity of the memory cell. Furthermore, a memory cell can receive multiple input signals from the outputs of other memory cells. Such input signals can be accumulated at the memory cell and then multiplied based on the conductivity of the memory cell. Thus, the multiplication and accumulation sub-operations, as previously described, can be performed by configuring or programming the memory cells to represent the nodes, edges, and weight values of the machine learning model.

The input data 503 can be the data that is to be processed by the machine learning operation, as previously described. For example, the input data 503 can be received from the host system 510 to be analyzed with the machine learning operation, and the input data can be stored at a region of the memory cells of the memory component 500. The machine learning operation can be performed with the input data 503, and the output data 504 can be generated based on the input data 503 and the machine learning model 502, as previously described. For example, the machine learning operation component 501 can configure the memory cells for the machine learning operation (e.g., the multiplication and accumulation sub-operations) based on the machine learning model 502. The configuration of a memory cell can correspond to the programming of the conductivity of the memory cell based on a weight value specified by the machine learning model 502. The machine learning operation component 501 can then process the input data 503 to generate the output data 504 and can store the output data 504 at another region of the memory cells of the memory component 500. In some embodiments, the output data 504 can be transmitted back to the host system 510. In the same or alternative embodiments, the machine learning model 502, the input data 503, and the output data 504 can be stored at memory cells that are closer or more proximate to the machine learning operation component 501 than the other memory cells of the memory component that are used to store host data not used by the machine learning operation.

In this way, the memory cells of the same memory component can be used to store data from the host system 510 and to perform machine learning operations for the host system 510. For example, a region of the memory cells can be used to store host data and to return the host data in response to read requests from the host system 510. Another, different region of the memory cells can be used to represent the nodes and weights of the neural network for the machine learning operation. In some embodiments, the regions of memory cells can be coupled with an internal bus of the memory component.

Figure 6 is a flowchart of an example method 600 for allocating a portion of a memory component to a machine learning operation and for storing host data, in accordance with some embodiments. The method 600 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
In some embodiments, the method 600 is performed by the machine learning operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 6, at operation 610, the processing logic receives a request to perform a machine learning operation at a memory component. For example, the host system can provide an indication that the memory component is to perform a machine learning operation. As previously described, the machine learning operation can be a neural network operation. The host system can store data at the memory component and can indicate that the input to the machine learning operation is data that has already been stored at the memory component. For example, the host system can specify or identify the particular data, or the location of the data stored at the memory component, that is to be the input to the machine learning operation. At operation 620, the processing logic allocates a portion of the memory component to the machine learning operation. For example, a region or portion of the memory cells of the memory component can be used to implement the machine learning operation based on a machine learning model. In some embodiments, the machine learning model can be received from the host system or can be retrieved from another region of the memory component. As previously described, the allocating of the portion of the memory component can correspond to the programming of memory cells to implement the machine learning model, such as a neural network. At operation 630, the processing logic determines a remaining portion of the memory component that is not allocated to the machine learning operation. For example, another region or portion of the memory cells that is not used to implement the machine learning operation can be identified; that is, the remaining memory cells of the memory component that can be used to store host data can be determined. In some embodiments, a data structure can store information identifying the regions or memory cells (or data blocks or other such data units) of the memory component that are used to implement the machine learning operation and the other regions or memory cells (or other data blocks or data units) that can be used to store host data while the machine learning operation is implemented within the memory component. At operation 640, the processing logic receives host data from the host system. For example, the host system can provide data to be stored at the memory component. In some embodiments, the host data can be data that is not operated on by the machine learning operation. In the same or alternative embodiments, the host data can be a combination of data that is not operated on by the machine learning operation and other data that is to be input to the machine learning operation. At operation 650, the processing logic stores the host data from the host system at the remaining portion of the memory component that is not allocated to the machine learning operation, as illustrated by the sketch below. For example, the host data can be stored at a memory component that is also performing machine learning operations with other data.
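The following is a small Python sketch of the allocation bookkeeping that method 600 describes. The class name, the block-level granularity, and the routing of writes are illustrative assumptions rather than the disclosed implementation.

```python
# Illustrative bookkeeping for method 600: allocate a portion of the memory
# cells to a machine learning model and track the remaining portion for host
# data. Block-level granularity and all names here are assumptions.

class MLMemoryComponent:
    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.model_blocks = set()   # blocks programmed with the model (op 620)
        self.host_data = {}         # block -> host data

    def allocate_for_model(self, weights_per_block, num_weights):
        """Operation 620: reserve enough blocks to hold the model's weights."""
        needed = -(-num_weights // weights_per_block)  # ceiling division
        free = (b for b in range(self.total_blocks)
                if b not in self.model_blocks and b not in self.host_data)
        self.model_blocks = set(next(free) for _ in range(needed))

    def remaining_blocks(self):
        """Operation 630: the portion not allocated to the ML operation."""
        return [b for b in range(self.total_blocks)
                if b not in self.model_blocks and b not in self.host_data]

    def store_host_data(self, data):
        """Operations 640/650: place host data only in the remaining portion."""
        remaining = self.remaining_blocks()
        if not remaining:
            raise RuntimeError("no capacity outside the ML allocation")
        self.host_data[remaining[0]] = data

mem = MLMemoryComponent(total_blocks=8)
mem.allocate_for_model(weights_per_block=64, num_weights=200)  # uses 4 blocks
mem.store_host_data(b"host payload")
print(sorted(mem.model_blocks), mem.remaining_blocks())  # [0, 1, 2, 3] [5, 6, 7]
```

The separate `model_blocks` set stands in for the data structure mentioned above that records which regions implement the machine learning operation and which remain available for host data.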
In some embodiments, the host system can specify whether particular data is or is not to be an input for the machine learning operation. If the data is to be an input for the machine learning operation, then the received data can be stored at a region or portion of the memory component that stores input data for the machine learning operation. For example, as previously described with respect to FIG. 5, the data can be stored at a region of the memory component that has been allocated to store input data for machine learning operations. Otherwise, if the data is not to be an input for the machine learning operation, then the data can be stored elsewhere at the memory component.

Thus, the memory component can include internal logic to perform a machine learning operation, and the internal logic can be implemented in the memory cells of the memory component. The same memory component can further store host data. The host data can include input data that is used with the machine learning operation as well as other data that is not input data for, or otherwise used with, the machine learning operation.

Figure 7 is a flowchart of an example method 700 for providing an indication of a capacity of a memory component to a host system based on a machine learning model, in accordance with some embodiments. The method 700 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 700 is performed by the machine learning operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Therefore, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 7, at operation 710, the processing logic allocates a portion of a memory component for a machine learning operation based on a machine learning model. For example, the memory cells of the memory component can be configured or programmed based on the nodes, edges, and weights specified by the machine learning model. Thus, a portion of the memory cells of the memory component can be allocated to be programmed or configured for the machine learning operation. At operation 720, the processing logic provides, to the host system, an indication of the capacity of the remaining portion of the memory component based on the machine learning model. For example, the capacity of the memory component to store host data can be determined. The capacity to store host data can be based on the difference between the capacity of the memory component when no machine learning operation is implemented at the memory component and the capacity of the memory component when the machine learning operation is implemented at the memory component.
In some embodiments, the capacity can be determined as the difference between the amount of data that can be stored at the total number of memory cells of the memory component when no machine learning operation is implemented at the memory cells and the amount of data that can be stored when a number of the memory cells are used to implement the machine learning operation. The indication can be provided by the memory component or by the memory subsystem controller of a memory subsystem that includes the memory component. In some embodiments, the indication can specify the capacity of the memory component (or memory subsystem) to store host data not used by the machine learning operation, the capacity of the memory component to store input data for the machine learning operation, the capacity of the memory component to store output data of the machine learning operation, and the portion of the memory component that is used to implement the machine learning operation.

At operation 730, the processing logic receives an instruction to change the machine learning model of the machine learning operation. In some embodiments, the host system can specify that a different machine learning model is to be used with the machine learning operation. For example, a new neural network or other such new machine learning model can be implemented. In some embodiments, the host can indicate that a new machine learning model is to be used when a different analysis or processing of the input data, or a different type or classification of input data, is to be used with the machine learning operation. A different number of memory cells can be programmed or configured based on the new machine learning model. For example, a portion of the memory cells of the memory component may have been configured or programmed to perform the machine learning operation based on the prior machine learning model, and the machine learning operation is now to be based on the new machine learning model. Another portion of the memory cells of the memory component can then be configured or programmed to perform the machine learning operation based on the new machine learning model.

In some embodiments, more memory cells or fewer memory cells can be configured or programmed to implement the machine learning operation with the new machine learning model relative to the prior machine learning model. In the same or alternative embodiments, the memory cells used to implement the machine learning operation with the new machine learning model can be different memory cells than those used to implement the machine learning operation with the prior machine learning model. For example, if more memory cells are to be used for the new machine learning model, then the memory cells that were configured for the prior machine learning model, along with additional memory cells, can be used to implement the machine learning operation with the new machine learning model. Thus, the memory cells used to implement the machine learning operation with the new machine learning model can include the same memory cells that were used to implement the machine learning operation with the prior machine learning model, or a subset of those memory cells. In some embodiments, the memory cells configured or programmed to implement the new machine learning model can be different from the memory cells that were used to implement the prior machine learning model; a rough sketch of the capacity bookkeeping involved follows below.
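As a rough illustration of operations 710 through 730, the following Python sketch recomputes the host-visible capacity when the machine learning model changes. The cell counts, the one-weight-per-cell mapping, and the function names are assumptions made for this example, not disclosed parameters.

```python
# Illustrative capacity bookkeeping for method 700: the host-visible capacity
# shrinks or grows as a different machine learning model is allocated.
# Cell counts and the one-weight-per-cell mapping are assumptions.

TOTAL_CELLS = 4096
BYTES_PER_CELL = 1  # assume each cell stores one byte of host data

def cells_for_model(model):
    """Cells needed to program a model: one cell per edge weight (assumed)."""
    return sum(rows * cols for rows, cols in model["layer_shapes"])

def remaining_capacity_bytes(model):
    """Operation 720: capacity of the portion not allocated to the model."""
    used = cells_for_model(model)
    if used > TOTAL_CELLS:
        raise ValueError("model does not fit in the memory component")
    return (TOTAL_CELLS - used) * BYTES_PER_CELL

small_model = {"layer_shapes": [(16, 8), (8, 4)]}     # 160 weights
large_model = {"layer_shapes": [(64, 32), (32, 10)]}  # 2368 weights

# Initial allocation (operation 710) and indication to the host (720).
print(remaining_capacity_bytes(small_model))  # 3936

# The host changes the model (operation 730); the capacity is recomputed
# and re-indicated to the host.
print(remaining_capacity_bytes(large_model))  # 1728
```

The point of the sketch is simply that the indicated capacity is a function of the currently programmed model, so every model change triggers a fresh indication to the host system, as the remaining operations of method 700 describe next.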
For example, a different group of memory cells can be used to implement the machine learning operation with the new machine learning model than the group of memory cells used for the previous machine learning model. In some embodiments, the different group of memory cells may be memory cells that were previously allocated to store host data. Thus, the group of memory cells used to store host data and the group of memory cells used to implement the machine learning model can be rotated among the groups of memory cells, so that the machine learning operation is not continuously implemented with the same group of memory cells.

At operation 740, in response to receiving the instruction, the processing logic allocates another portion of the memory component for machine learning operations based on the changed machine learning model. For example, a different number of memory cells can be programmed or configured to implement the machine learning operation with the changed machine learning model. In some embodiments, the memory cells implementing the previous machine learning model may be configured or programmed to no longer implement the previous machine learning model, and those memory cells can then be configured or programmed to implement the new machine learning model. Because a different number of memory cells can be used to implement the new machine learning model, the capacity of the memory component to store host data can also change. For example, if the new machine learning model specifies more nodes than the previous machine learning model, more memory cells can be used to implement the new machine learning model. At operation 750, the processing logic provides the host system with another indication of the capacity of the remaining portion of the memory component based on the changed machine learning model. For example, the indication may specify the remaining capacity of the memory component that can be used to store host data when the internal logic of the memory component (e.g., other memory cells) is configured or programmed to implement the new machine learning model. The remaining capacity of the memory component may be different from the previous remaining capacity of the memory component when the previous machine learning model was implemented.
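One way to picture operations 730 through 750 is as a reallocation step that releases the cells of the previous model, programs cells for the new model, and reports the updated remaining capacity. The sketch below is illustrative only; the cell counts are assumptions:

```python
# Hypothetical sketch of operations 730-750: reconfigure the component for a
# changed model and report the updated capacity. Cell counts are assumptions.

CELLS_PER_NODE = 4
CELLS_PER_EDGE = 1

def cells_for_model(nodes: int, edges: int) -> int:
    """Memory cells programmed to implement a model's nodes and weights."""
    return nodes * CELLS_PER_NODE + edges * CELLS_PER_EDGE

def change_model(total_cells: int, new_nodes: int, new_edges: int) -> int:
    """Cells of the previous model are assumed to be released first; return
    the updated capacity indication for the host system (operation 750)."""
    needed = cells_for_model(new_nodes, new_edges)
    if needed > total_cells:
        raise ValueError("changed model does not fit in the memory component")
    return total_cells - needed

# A model with more nodes leaves less capacity for host data.
print(change_model(1_000_000, new_nodes=4000, new_edges=20000))  # 964000
```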
FIG. 8 illustrates the machine learning operation component implemented in a memory subsystem controller of a memory subsystem 800, in accordance with some embodiments of the present disclosure. As shown in FIG. 8, the machine learning operation component 113 may be implemented by a memory subsystem controller 815 corresponding to the memory subsystem controller 115. For example, a processor or other circuitry of the memory subsystem controller 815 may implement the operations of the machine learning operation component 113 as described herein. Thus, the memory subsystem can contain internal logic to perform machine learning operations, where the internal logic corresponds to the memory subsystem controller. For example, the memory subsystem controller may include a neural network processor or other functionality to perform machine learning operations. Examples of the operations performed by such a machine learning or neural network processor include, but are not limited to, the multiplication and accumulation sub-operations described herein.

In operation, input data for a machine learning operation may be stored at the memory component 830 and/or the memory component 840. In some embodiments, the memory components 830 and 840 are non-volatile memory components, volatile memory components, or a mixture of one or more non-volatile memory components and one or more volatile memory components. The machine learning operation component 113 may receive an instruction from the host system to perform a machine learning operation. In response to receiving the instruction, the machine learning operation component 113 may retrieve the input data from one of the memory components 830 and 840. For example, the machine learning operation component 113 can identify the particular input data stored at one of the memory components and can retrieve the data from the corresponding memory component. The machine learning operation component 113 at the memory subsystem controller 815 can then perform the machine learning operation. For example, the multiplication and accumulation sub-operations, and any other machine learning sub-operations, can be performed by the machine learning operation component 113 within the memory subsystem controller 815. In some embodiments, the machine learning model used for the machine learning operation may be stored at the memory subsystem controller 815 or retrieved from one of the memory components 830 and 840. The internal logic of the memory subsystem controller 815 may be configured based on the machine learning model. Furthermore, the output of the machine learning operation may be transmitted back to the host system that requested the machine learning operation, and/or the output may be stored at one of the memory components 830 and 840. For example, the output may be stored at the same memory component from which the input data was retrieved. In some embodiments, the output may be stored at a particular location of a particular memory component that is used to store the output data of machine learning operations.

Although not shown, in some embodiments the machine learning operation component 113 may be implemented by another integrated circuit that is separate from the memory subsystem controller 815. For example, another integrated circuit coupled with the memory subsystem controller and the memory components 830 and 840 via an internal bus or interface of the memory subsystem controller can perform the functionality of the machine learning operation component 113.
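For context, a multiplication and accumulation sub-operation of the kind referenced above is a weighted sum feeding one node of the model. The following sketch is a generic illustration of that arithmetic, not the controller's actual circuitry:

```python
# Generic illustration of a multiplication and accumulation (MAC)
# sub-operation for one node of a neural network layer.

def mac(inputs: list[float], weights: list[float], bias: float = 0.0) -> float:
    """Multiply each input by its weight and accumulate the products."""
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w
    return acc

# One node with three inputs: 1*0.5 + 2*(-0.25) + 3*0.1 = 0.3
print(mac([1.0, 2.0, 3.0], [0.5, -0.25, 0.1]))
```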
FIG. 9 illustrates the machine learning operation component implemented in one or more memory components of a memory subsystem 900, in accordance with some embodiments of the present disclosure. As shown in FIG. 9, the machine learning operation component 113 may be implemented by one or more memory components 930 and 940. For example, a machine learning operation or neural network accelerator can be executed inside a memory component. In some embodiments, the memory components may be volatile memory components and/or non-volatile memory components. In the same or alternative embodiments, the internal logic used to perform the machine learning operation or neural network accelerator can be implemented by any number or combination of volatile memory components and non-volatile memory components.

In operation, the memory subsystem controller 915 may receive a request from the host system to perform a machine learning operation with input data. For example, the host system can specify a particular machine learning model that is to be used with particular input data. The memory subsystem controller 915 can identify the particular memory component that currently stores the particular input data. Furthermore, the memory subsystem controller 915 may transmit a request to perform the machine learning operation to the machine learning operation component 113 at the identified memory component. For example, the request may specify the input data and the machine learning model that are to be used for the machine learning operation. The machine learning operation component 113 may then configure the internal logic of the memory component to perform the machine learning operation on the input data to generate output data. The output data may be stored at the memory component and/or provided back to the memory subsystem controller 915 for transmission back to the host system.

In some embodiments, the machine learning operation performed by the internal logic of the memory component may be certain sub-operations of the machine learning operation. For example, in some embodiments the internal logic of the memory component may have limited processing capability and may perform a subset of the sub-operations of the machine learning operation. In some embodiments, the internal logic can perform the multiplication and accumulation sub-operations. As previously described, the machine learning operation may correspond to multiple layers of nodes that are associated with multiplication and accumulation sub-operations. The internal logic of the memory component can perform the sub-operations of a subset of the layers of the machine learning operation to generate intermediate results, which are then passed back to the memory subsystem controller 915 (or a separate integrated circuit), which may perform further machine learning operations on the intermediate data based on the final layers of the machine learning model to generate the output data. In some embodiments, the intermediate results can instead be transmitted back to the host system so that the host system can perform the further machine learning operations on the intermediate data based on the final layers of the machine learning model to generate the output data.

In some embodiments, different parts of the machine learning operation may be implemented in the internal logic of different memory components. For example, if the machine learning model contains a larger number of nodes or layers, the internal logic of more memory components can be configured to implement different portions of the machine learning operation than when the machine learning model contains a smaller number of nodes or layers.

FIG. 10 is a flow diagram of an example method 1000 to perform a portion of a machine learning operation at one or more memory components of a memory subsystem, in accordance with some embodiments.
The method 1000 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 1000 is performed by the machine learning operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 10, at operation 1010, the processing logic receives an instruction to perform a machine learning operation at a memory subsystem. For example, the host system may provide a request for a neural network accelerator internal to the memory subsystem to process input data that is stored at the same memory subsystem. At operation 1020, the processing logic configures one or more memory components of the memory subsystem to perform a portion of the machine learning operation. For example, as previously described, the internal logic of a memory component may perform sub-operations of the machine learning operation. The memory cells, digital logic, or resistor arrays of the one or more memory components may be configured to perform multiplication and accumulation sub-operations based on the machine learning model specified by the host system. In some embodiments, the memory component that also stores the input data may be the memory component configured to implement the machine learning operation. At operation 1030, the processing logic receives results of the portion of the machine learning operation from the one or more memory components. For example, results of the sub-operations of the machine learning operation performed by the internal logic of the one or more memory components may be received. The results may be the outputs of the multiplication and accumulation sub-operations. In some embodiments, the sub-operations performed by the internal logic may correspond to a portion of the layers of the machine learning model of the machine learning operation. For example, the results of the portion of the machine learning operation may be intermediate data from a portion of the layers of the machine learning model. The intermediate data may be the input data for the next portion of the layers of the machine learning model that has not yet been executed by the internal logic of the memory component. At operation 1040, the processing logic performs the remainder of the machine learning operation based on the results received from the one or more memory components. For example, the remaining sub-operations for the remaining layers can be executed by internal logic of the memory subsystem controller, or by a separate integrated circuit, to generate the output data of the machine learning operation. In some embodiments, the intermediate data can instead be transmitted back to the host system, and the host system can perform the remainder of the machine learning operation.

Thus, a memory component of the memory subsystem can perform part of a machine learning operation, as sketched below.
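The following sketch is a rough illustration of this split execution under assumed, hypothetical layer weights: the memory component evaluates the first layer and returns intermediate data, and the controller (or host) finishes the final layer:

```python
# Illustrative split of a layered model for method 1000: the memory
# component evaluates the first layer and returns intermediate data; the
# controller finishes the final layer. All weights are hypothetical.

def mac(inputs, weights, bias=0.0):
    """Multiplication and accumulation sub-operation for one node."""
    return bias + sum(x * w for x, w in zip(inputs, weights))

def run_layers(inputs, layers):
    """Evaluate a chain of layers; each layer is one weight vector per node."""
    data = inputs
    for layer in layers:
        data = [mac(data, weights) for weights in layer]
    return data

model = [
    [[0.5, -0.2], [0.1, 0.4]],  # layer 1: executed inside the memory component
    [[1.0, 1.0]],               # layer 2: executed by the controller (or host)
]

intermediate = run_layers([0.3, 0.7], model[:1])  # operations 1020-1030
output = run_layers(intermediate, model[1:])      # operation 1040
print(intermediate, output)   # roughly [0.01, 0.31] and [0.32]
```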
The partial results of the machine learning operation can be transmitted back to the memory subsystem controller (or to a separate integrated circuit at the host system or the memory subsystem) so that the data returned from the memory component can be used to complete the machine learning operation.

Aspects of the present disclosure further relate to using a bus to transmit data between a host system and a memory component or memory subsystem. A conventional memory subsystem may include, or may be coupled with, a bus that is used to transmit data between the host system and the conventional memory subsystem. For example, the bus can be used to transmit requests (e.g., read operations, write operations, etc.) from the host system to the memory subsystem and to transmit data from the memory subsystem to the host system. Another bus can be used to transmit data between the host system and a machine learning processor (e.g., a neural network processor). For example, the host system can transmit data over one bus when using the machine learning processor, while using another bus to transmit data when using the conventional memory subsystem.

As previously described, a memory component or memory subsystem may contain internal logic with functionality to perform machine learning operations. The same memory component or memory subsystem may contain functionality to store host data that is separate from the machine learning operations. In some embodiments, the functionality for machine learning operations implemented by the internal logic of the memory component or memory subsystem may be referred to as a machine learning space, and the memory cells of the memory component or memory subsystem that are used to store host data separate from the machine learning operations may be referred to as a memory space. Thus, a single memory component or memory subsystem may contain functionality for both the machine learning space and the memory space. Therefore, a single bus cannot be dedicated to transmitting data only for the memory space of the memory component or memory subsystem.

Aspects of the present disclosure address the above and other deficiencies by transmitting data for the machine learning space and the memory space of the memory component or memory subsystem over a single bus, or by transmitting data for the machine learning space and the memory space of the memory component or memory subsystem over multiple buses. For example, a single bus can be virtualized to transmit data between the host system and the memory component or memory subsystem. In some embodiments, the host system may transmit operations (e.g., read requests, write requests, etc.) to the memory component or memory subsystem over the bus. Based on the type of the operation, or on the memory address location of the operation, the operation from the host system can be transmitted to be performed at the machine learning space or at the memory space of the memory component or memory subsystem. Thus, operations transmitted from the host system over a single bus can be provided to either the memory space or the machine learning space.

In some embodiments, multiple buses may be used to transmit data with a memory component or memory subsystem that includes the memory space and the machine learning space. For example, one bus can be used to transmit data between the host system and the memory space, and another bus can be used to transmit data between the host system and the machine learning space.
Thus, the memory component or memory subsystem may include multiple buses that are each used to separately transmit data for the memory space and for the machine learning space.

Advantages of the present disclosure include, but are not limited to, reduced complexity in the design of the memory component or memory subsystem when a single bus is used to transmit data for the machine learning space and the memory space within the memory component or memory subsystem. For example, a single bus can result in fewer connections and less wiring than multiple buses coupling the host system with the memory component or memory subsystem. Additionally, the management of the data or operations transmitted over the bus for the memory space and the machine learning space can result in improved performance for both the memory space and the machine learning space. Alternatively, the use of separate buses for the memory space and the machine learning space can allow faster transmission of data for each of the memory space and the machine learning space, resulting in improved performance for the functionality of the memory space and the machine learning space.

FIG. 11 illustrates an example memory component 1110 and an example memory subsystem 1120 with a single bus for transmitting data for a memory space and a machine learning space, in accordance with some embodiments of the present disclosure. The machine learning operation component 113 may be used to transmit and receive data for the memory space and the machine learning space of each of the memory component 1110 and the memory subsystem 1120.

As shown in FIG. 11, the memory component 1110 may include the machine learning operation component 113 to manage the receiving and transmitting of data for the memory space 1111 and the machine learning space 1112 of the memory component 1110 over a single bus 1113. As previously described, the memory space 1111 may be the memory cells of the memory component 1110, or any other storage unit, that can be used to store host data from the host system. In some embodiments, the host data may be data that is not to be used by the machine learning operations of the machine learning space 1112. The machine learning space 1112 may be the internal logic of the memory component 1110. For example, as previously described, the internal logic may correspond to other memory cells (or any other type of memory cells) of the memory component 1110 that are to be configured or programmed based on the definition of a machine learning model so that machine learning operations can be performed at the memory cells.

Furthermore, the memory subsystem 1120 may include the machine learning operation component 113 to manage the receiving and transmitting of data for the memory space 1121 and the machine learning space 1122 over a single bus 1123. For example, the memory space 1121 may be the memory cells of one or more memory components that are used to store host data as previously described, and the machine learning space 1122 may be the internal logic of one or more memory components that is used to perform one or more machine learning operations. In some embodiments, the internal logic of the machine learning space 1122 may be included in the controller of the memory subsystem 1120 as previously described. In some embodiments, the memory component 1110 and the memory subsystem 1120 may include a decoder that can be used to receive data from the corresponding bus, decode the received data, and then transmit the decoded data to the memory space and/or the machine learning space.
For example, the decoder may decode a logical address specified by an operation provided via the bus into a physical address located at one of the memory space or the machine learning space. In some embodiments, each of the memory space and the machine learning space may contain a separate decoder.

Furthermore, an internal bus can be used to couple the machine learning space with the memory space. For example, input data and the machine learning model can be transmitted from the memory space to the machine learning space via the internal bus, and output data from the machine learning space can be transmitted to the memory space via the internal bus. Thus, the memory component or memory subsystem may include a bus for transmitting data between the host system and the memory component or memory subsystem, as well as an internal bus for transmitting data between the memory space and the machine learning space of the memory component or memory subsystem.

In operation, data can be transmitted between the host system and the memory component or memory subsystem over a single bus. In some embodiments, a bus may refer to an interface with one or more signals that transmits data between at least two devices (e.g., the host system and the memory component or memory subsystem). The memory component or memory subsystem may receive requests for operations for the memory space and the machine learning space via the single bus. Examples of operations for the memory space include, but are not limited to, read operations, write operations, and erase operations associated with host data. Examples of operations for the machine learning space include, but are not limited to, providing input data for a machine learning operation, providing the definition of a machine learning model, commands to initiate or execute a machine learning operation based on the definition of a particular machine learning model with particular input data, requests to receive the output data or results of a machine learning operation, and so forth. In some embodiments, an operation for the machine learning space can be any operation that interacts with the machine learning operation. In this way, different operations for the different functionality (e.g., the memory space or the machine learning space) internal to the memory component or memory subsystem can be received over the same virtualized bus. As described in further detail below, an operation or data may be transmitted or provided to the memory space or the machine learning space based on the type of the received operation, or based on another characteristic or attribute of the operation or data.

FIG. 12 is a flow diagram of an example method 1200 to transmit a requested operation to a memory space or a machine learning space based on a type of the operation, in accordance with some embodiments. The method 1200 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 1200 is performed by the machine learning operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified.
Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 12, at operation 1210, the processing logic receives a request to access a memory component or a memory subsystem. The request can be provided by the host system. For example, the request may be an operation to access the memory space or the machine learning space of the memory component or memory subsystem. The request may be received via a bus or other such interface that is used to transmit data between the host system and the memory component and/or memory subsystem. At operation 1220, the processing logic determines a type of operation specified by the request. For example, the request may specify an operation that is to be performed at the memory space or at the machine learning space of the memory component or memory subsystem. For example, the request may be an operation to write data to, or read data from, the memory component or memory subsystem; the request may specify any operation that can be used to access or modify host data stored at the memory space. Alternatively, the request may be to access, update, or otherwise interact with input data, output data, or a machine learning model definition, and/or to implement or execute a machine learning operation at the internal logic of the memory component or memory subsystem. Thus, the operation may be of a first type that corresponds to operations for the memory space, or of a second type that corresponds to operations for the machine learning space. The operation can specify the type of action that is to be performed at the memory component or memory subsystem.

At operation 1230, the processing logic transmits the request to the memory space or to the machine learning space of the memory component or memory subsystem based on the type of operation specified by the request, as illustrated by the sketch below. For example, the operation provided by the host system may be provided to the machine learning space or to the memory space based on whether that type of operation is used to access the memory space or the machine learning space. For example, the memory component or memory subsystem may include a data structure that identifies the different types of operations that can be received from the host system and performed by the memory component or memory subsystem, and each type of operation can be assigned to the memory space or to the machine learning space. When a request for an operation assigned to the memory space is received via the bus, the operation can be transmitted to, or performed at, the memory space. Otherwise, when a request for an operation assigned to the machine learning space is received via the bus, the operation can be transmitted to, or performed at, the machine learning space. In some embodiments, the operation may be transmitted to a decoder or another component that decodes the operation for the corresponding memory space or machine learning space.
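A minimal sketch of this type-based dispatch, assuming a hypothetical table that assigns each operation type to a space (the names are illustrative, not from the disclosure), might look as follows:

```python
# Hypothetical dispatch table for method 1200: each operation type is
# assigned to exactly one of the memory space or the machine learning space.

OPERATION_SPACE = {
    "read": "memory_space",
    "write": "memory_space",
    "erase": "memory_space",
    "provide_ml_input": "machine_learning_space",
    "provide_ml_model": "machine_learning_space",
    "execute_ml_operation": "machine_learning_space",
    "read_ml_output": "machine_learning_space",
}

def route_by_type(operation_type: str) -> str:
    """Return the space that should receive a request of this type."""
    try:
        return OPERATION_SPACE[operation_type]
    except KeyError:
        raise ValueError(f"unknown operation type: {operation_type}")

print(route_by_type("write"))                 # memory_space
print(route_by_type("execute_ml_operation"))  # machine_learning_space
```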
Thus, different types of operations can be transmitted to the memory component or memory subsystem via a single bus, and each operation can be transmitted to the memory space or to the machine learning space based on the type of the operation.

FIG. 13 is a flow diagram of an example method 1300 to provide a requested operation to a memory space or a machine learning space based on a memory address, in accordance with some embodiments. The method 1300 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 1300 is performed by the machine learning operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 13, at operation 1310, the processing logic receives a request to perform an operation at a memory component or memory subsystem. As previously described, the operation may be an operation for the memory space (e.g., a read operation, a write operation, an erase operation, etc.) or an operation for the machine learning space (e.g., an operation associated with a machine learning operation). At operation 1320, the processing logic determines a memory address specified by the operation. For example, the operation may specify a memory address, or other such location of a data block or other logical or physical unit of data, of the memory component or memory subsystem. In some embodiments, the memory address may be the location of a physical data block at a memory component of the memory component or memory subsystem.

At operation 1330, the processing logic determines whether the memory address corresponds to the memory space or to the machine learning space. For example, one range of memory addresses can be assigned to the memory space, and another range of memory addresses can be assigned to the machine learning space. Each range of memory addresses may contain a unique set of memory addresses, such that any memory address is assigned to only one of the memory space or the machine learning space. At operation 1340, the processing logic provides the operation to be performed at the determined memory space or machine learning space. For example, the operation may be forwarded to a decoder or other such component of the memory component or memory subsystem that decodes operations for the corresponding memory space or machine learning space. If the memory address of the operation is within the range of memory addresses assigned to the memory space, the operation can be provided to the memory space. Otherwise, if the memory address of the operation is within the range of memory addresses assigned to the machine learning space, the operation can be provided to the machine learning space.
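A comparable sketch for this address-range dispatch follows; the boundary between the two ranges is chosen purely for illustration:

```python
# Hypothetical address-range dispatch for method 1300. Each address belongs
# to exactly one space; the range boundaries below are assumptions.

MEMORY_SPACE_RANGE = range(0x0000_0000, 0x0C00_0000)           # host data
MACHINE_LEARNING_SPACE_RANGE = range(0x0C00_0000, 0x1000_0000)  # internal logic

def route_by_address(address: int) -> str:
    """Return the space whose assigned address range contains the address."""
    if address in MEMORY_SPACE_RANGE:
        return "memory_space"
    if address in MACHINE_LEARNING_SPACE_RANGE:
        return "machine_learning_space"
    raise ValueError(f"address {address:#x} is not assigned to either space")

print(route_by_address(0x0100_0000))  # memory_space
print(route_by_address(0x0D00_0000))  # machine_learning_space
```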
FIG. 14 illustrates an example memory component 1410 and an example memory subsystem 1420 with separate buses for transmitting data for a memory space and a machine learning space, in accordance with some embodiments of the present disclosure. Each of the memory component 1410 and the memory subsystem 1420 may include the machine learning operation component 113 to transmit and receive data for the memory space and the machine learning space over separate buses.

As shown in FIG. 14, the memory component 1410 may include the machine learning operation component 113, a memory space 1411, and a machine learning space 1412. The memory component 1410 may be coupled with buses or interfaces 1413 and 1414. Each of the buses 1413 and 1414 can be used to transmit data for the corresponding memory space 1411 or machine learning space 1412. For example, the bus 1413 can be used to transmit data between the host system and the memory space 1411, and the bus 1414 can be used to transmit data between the host system and the machine learning space 1412. As such, each bus can be used to transmit data between the host system and one of the memory space 1411 and the machine learning space 1412 without transmitting data for the other of the memory space 1411 and the machine learning space 1412. Similarly, the memory subsystem 1420 may include the machine learning operation component 113, which may forward or provide data received via buses 1423 and 1425 to a memory space 1421 or a machine learning space 1422. For example, the bus 1423 can be used to transmit data between the host system and the memory space 1421, and the bus 1425 can be used to transmit data between the host system and the machine learning space 1422.

Furthermore, an internal bus can be used to couple the machine learning space of the memory component 1410 or the memory subsystem 1420 with the memory space. In this way, the memory component or memory subsystem may include separate buses for transmitting data between the host system and each of the machine learning space and the memory space, as well as an internal bus for transmitting data between the memory space and the machine learning space within the memory component or memory subsystem.

In operation, data can be transmitted between the host system and the memory component or memory subsystem over two buses, where each bus is dedicated to one of the memory space or the machine learning space internal to the memory component or memory subsystem. For example, data received via the bus 1423 may be forwarded or provided to the memory space 1421, and data received via the bus 1425 may be forwarded or provided to the machine learning space 1422.

FIG. 15 is a flow diagram of an example method 1500 to perform operations in an order based on a priority for machine learning operations, in accordance with some embodiments of the present disclosure. The method 1500 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 1500 is performed by the machine learning operation component 113 of FIG. 1.
Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 15, at operation 1510, the processing logic determines a set of operations for the memory space. The set of operations may be operations that have been received by the memory component or memory subsystem. In some embodiments, the set of operations may be stored at a buffer memory of the memory component or memory subsystem. For example, the set of operations may be operations that have been provided by the host system and that are to be performed at the memory component or memory subsystem. The set of operations can be received via a virtualized bus or via a separate bus as previously described. Furthermore, the set of operations may include, but is not limited to, read operations, write operations, or erase operations of host data that is stored at the memory component or memory subsystem and that is separate from, or is not to be used by, machine learning operations. Each of the operations may be an operation that is to be performed at the memory space of the memory component or memory subsystem. In some embodiments, the memory subsystem controller of a memory subsystem, or the local controller of a memory component, may receive the set of operations. In the same or alternative embodiments, the set of operations may be stored at a buffer of the memory subsystem or memory component.

At operation 1520, the processing logic determines another set of operations for the machine learning space. For example, another set of operations that are to be performed for machine learning operations as previously described may be received from the host system. The other set of operations may be received via the virtualized bus, or via another bus that is separate from the bus that provides the set of operations for the memory space. Thus, operations for the memory space and for the machine learning space can be received by the memory component or memory subsystem. At operation 1530, the processing logic receives an indication of a priority for a machine learning operation associated with the machine learning space. The priority can be received from the host system. For example, the host system may transmit a message that indicates a priority level for the machine learning operation that is being performed, or that is to be performed, at the machine learning space. In some embodiments, the priority may be a number or other such value that specifies the importance or a performance requirement of the machine learning operation. For example, the priority may have a high value to indicate a high priority level for the machine learning operation, or a low value to indicate a low priority level for the machine learning operation. A performance requirement may specify a maximum amount of time for the machine learning operation to process the input data, a rate at which the input data is to be processed by the machine learning operation, an amount of time that may elapse before the output data of the machine learning operation is to be provided to the host system, and so forth.
In some embodiments, the priority for the machine learning operation may be based on the input data provided to the machine learning operation. For example, if the input data has a high priority level, then the machine learning operation that applies the machine learning model to the input data may also have a high priority level. Otherwise, if the input data does not have a high priority level, then the machine learning operation that applies the machine learning model to the input data does not have a high priority level.

At operation 1540, the processing logic determines an order for the operations from the set for the memory space and from the set for the machine learning space based on the priority for the machine learning operation. For example, the operations from the set for the memory space and from the other set for the machine learning space may initially be ordered as the different operations are received at the memory component or memory subsystem. Thus, the operations may have an initial order based on when each respective operation was received at the memory component or memory subsystem: an operation received before another operation is placed earlier in the order than the other operation. The order can then be changed based on the priority for the machine learning operation. For example, if the host system indicates that the machine learning operation has a high priority or other such designation, the operations for the machine learning space can be reordered to precede the operations for the memory space.

At operation 1550, the processing logic performs the operations in the determined order to access the memory space and the machine learning space. For example, the operations may be performed, or transmitted to the memory space and the machine learning space, based on the reordering of the operations, so that operations placed earlier in the order are performed or transmitted before operations placed later in the order.

Thus, the order of the operations for the memory space and the machine learning space may be changed based on the priority of the machine learning operation that is to be performed at the machine learning space. If the priority of the machine learning operation is high, one or more operations for the machine learning operation may be placed earlier in the order than other operations for the memory space. For example, the operations for the machine learning operation can be performed earlier because the machine learning operation is associated with a higher priority or importance. Otherwise, if the priority of the machine learning operation is low, the operations for the memory space and the operations for the machine learning space can be ordered as the respective operations are received at the memory component or memory subsystem, as sketched below.
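As a simple illustration of operations 1540 and 1550 (not the claimed scheduler), a stable sort keyed on whether an operation targets a high-priority machine learning space reproduces the reordering described above:

```python
# Illustrative reordering for method 1500: when the machine learning
# operation has high priority, machine-learning-space operations are moved
# ahead of memory-space operations; otherwise arrival order is kept.

from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    space: str   # "memory_space" or "machine_learning_space"

def order_operations(ops: list[Operation], ml_priority_high: bool) -> list[Operation]:
    """Return the operations in execution order (operation 1540)."""
    if not ml_priority_high:
        return list(ops)   # keep the arrival order
    # sorted() is stable, so arrival order is preserved within each space.
    return sorted(ops, key=lambda op: op.space != "machine_learning_space")

arrived = [Operation("A", "memory_space"), Operation("B", "memory_space"),
           Operation("C", "memory_space"), Operation("D", "memory_space"),
           Operation("X", "machine_learning_space")]
print([op.name for op in order_operations(arrived, ml_priority_high=True)])
# ['X', 'A', 'B', 'C', 'D']
```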
FIG. 16A illustrates a series of operations that have been received for the memory space and the machine learning space of a memory component or memory subsystem, in accordance with some embodiments of the present disclosure. In some embodiments, the machine learning operation component 113 of the memory component or memory subsystem may receive a series of operations for the memory space and the machine learning space. The series of operations can be ordered sequentially as each operation is received by the memory component or memory subsystem. As shown in FIG. 16A, the operations 1600 may initially be ordered as the operations are received from the host system. For example, operation A 1610 may be received from the host system first, followed by operation B 1620, operation C 1630, operation D 1640, and operation X 1650. Operations A through D may be for the memory space, and operation X 1650 may be for the machine learning space. In some embodiments, the host system may provide an indication of whether a respective operation is to be performed at the memory space or at the machine learning space. In the same or alternative embodiments, the type of the operation or the memory address of the operation can be used to determine whether the operation is to be performed at the memory space or at the machine learning space, as previously described.

FIG. 16B illustrates the series of operations after being ordered based on a priority for machine learning operations, in accordance with some embodiments of the present disclosure. In some embodiments, the machine learning operation component 113 of the memory component or memory subsystem may order the series of operations for the memory space and the machine learning space based on the priority for the machine learning operation. As shown in FIG. 16B, the operations 1600 may be reordered based on the priority of the machine learning operation. For example, the host system may indicate that the priority for the machine learning operation is high, or is higher than the priority for the operations for the memory space. Thus, the later-received operation X 1650 can be reordered so that operation X 1650 is performed before the other operations A through D, and/or is transmitted to the machine learning space before the other operations A through D are transmitted to the memory space.

FIG. 17 is a flow diagram of an example method 1700 to change the performance of a machine learning operation based on a performance metric associated with a memory space, in accordance with some embodiments. The method 1700 may be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 1700 is performed by the machine learning operation component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments. Thus, not all processes are required in every embodiment. Other process flows are possible.

As shown in FIG. 17, at operation 1710, the processing logic receives host data at a memory component or memory subsystem that is performing a machine learning operation. For example, the host system may provide a read operation, a write operation, or an erase operation that is to be performed at the memory component or memory subsystem while the internal logic of the memory component or memory subsystem is performing the machine learning operation. At operation 1720, the processing logic determines a performance metric associated with the memory space of the memory component or memory subsystem.
The performance metric may be based on a rate at which read operations, write operations, or erase operations are performed at the memory space while the internal logic of the machine learning space is performing the machine learning operation. For example, in some embodiments, the internal logic of the machine learning space can affect the performance of the memory space. In some embodiments, the type of machine learning model used for the machine learning operation at the machine learning space, the amount of input data, and so forth may change the rate at which operations are performed at the memory space. As larger machine learning models are used, or as more operations are performed for the machine learning space, fewer resources of the memory component or memory subsystem can be used for the memory space. At operation 1730, the processing logic determines whether the performance metric associated with the memory space satisfies a threshold performance metric. For example, the performance metric can be considered to satisfy the threshold performance metric when the performance metric is equal to or less than the threshold performance metric, and can be considered not to satisfy the threshold performance metric when the performance metric exceeds the threshold performance metric. As previously described, the performance metric may correspond to the rate at which operations for the memory space are performed. For example, the performance metric may be the latency of write operations, read operations, or a combination of write and read operations performed at the memory space. The performance metric may be determined not to satisfy the threshold when the latency of the operations exceeds a threshold latency, and may be determined to satisfy the threshold when the latency of the operations is equal to or less than the threshold latency.

At operation 1740, in response to determining that the performance metric does not satisfy the threshold performance metric, the processing logic changes the performance of the machine learning operation that is performed at the memory component or memory subsystem. For example, if the latency of the operations exceeds the threshold latency, the machine learning operation can be changed so that the latency of the operations for the memory space falls below the threshold latency. In some embodiments, the performance of the machine learning operation can be changed by reducing the rate at which the machine learning operation is performed, reducing the rate at which input data is provided to the machine learning operation, using a different machine learning model (e.g., a machine learning model with fewer sub-operations), and so forth. Otherwise, in response to determining that the performance metric satisfies the threshold performance metric, the machine learning operation performed at the memory component or memory subsystem is not changed. A sketch of this check follows.
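The following is a minimal sketch of operations 1730 and 1740; the latency threshold and the proportional throttling action are chosen purely for illustration:

```python
# Illustrative threshold check for method 1700: throttle the machine
# learning operation when memory-space latency exceeds the threshold.

THRESHOLD_LATENCY_US = 100.0   # hypothetical threshold latency metric

def adjust_ml_performance(observed_latency_us: float, ml_rate: float) -> float:
    """Return the (possibly reduced) rate at which the machine learning
    operation is performed, based on the memory-space performance metric."""
    if observed_latency_us > THRESHOLD_LATENCY_US:
        # Performance metric not satisfied (operation 1730): reduce the
        # execution rate so more resources go to the memory space (1740).
        return ml_rate * 0.5
    # Performance metric satisfied: the machine learning operation is unchanged.
    return ml_rate

print(adjust_ml_performance(observed_latency_us=150.0, ml_rate=1.0))  # 0.5
print(adjust_ml_performance(observed_latency_us=80.0, ml_rate=1.0))   # 1.0
```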
FIG. 18 illustrates an example machine of a computer system 1800 within which a set of instructions can be executed for causing the machine to perform any one or more of the methodologies discussed herein. In some embodiments, the computer system 1800 can correspond to a host system (e.g., the host system 120 of FIG. 1) that includes, is coupled to, or utilizes a memory subsystem (e.g., the memory subsystem 110 of FIG. 1), or it can be used to perform the operations of a controller (e.g., to execute an operating system to perform operations corresponding to the machine learning operation component 113 of FIG. 1). In alternative embodiments, the machine can be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, and/or the Internet. The machine can operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment.

The machine can be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, digital or non-digital circuitry, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 1800 includes a processing device 1802, a main memory 1804 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1806 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 1818, which communicate with each other via a bus 1830.

The processing device 1802 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 1802 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 1802 is configured to execute instructions 1826 for performing the operations and steps discussed herein. The computer system 1800 can further include a network interface device 1808 to communicate over a network 1820.

The data storage system 1818 can include a machine-readable storage medium 1824 (also known as a computer-readable medium) on which is stored one or more sets of instructions 1826 or software embodying any one or more of the methodologies or functions described herein. The instructions 1826 can also reside, completely or at least partially, within the main memory 1804 and/or within the processing device 1802 during execution thereof by the computer system 1800, with the main memory 1804 and the processing device 1802 also constituting machine-readable storage media. The machine-readable storage medium 1824, the data storage system 1818, and/or the main memory 1804 can correspond to the memory subsystem 110 of FIG. 1.
In one embodiment, the instructions 1826 include instructions to implement functionality corresponding to a machine learning operation component (e.g., the machine learning operation component 113 of FIG. 1). While the machine-readable storage medium 1824 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.

Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage systems.

The present disclosure also relates to an apparatus for performing the operations herein. This apparatus can be specially constructed for the intended purposes, or it can include a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program can be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems can be used with programs in accordance with the teachings herein, or it can prove convenient to construct a more specialized apparatus to perform the method. The structure for a variety of these systems will appear as set forth in the description below.
In addition, the present disclosure is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages can be used to implement the teachings of the present disclosure as described herein.

The present disclosure can be provided as a computer program product, or software, that can include a machine-readable medium having stored thereon instructions that can be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium such as a read-only memory ("ROM"), a random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory components, and so forth.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications can be made thereto without departing from the broader spirit and scope of embodiments of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. |
A method of manufacturing is provided, the method including processing a first workpiece in a nitride processing step and measuring a thickness of a field oxide feature formed on the first workpiece. The method also includes forming an output signal corresponding to the thickness of the field oxide feature. In addition, the method includes feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value. |
What is claimed: 1. A method of manufacturing, the method comprising:processing a first workpiece in a nitride processing step; measuring a thickness of a field oxide feature formed on the first workpiece; forming an output signal corresponding to the thickness of the field oxide feature; and feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a threshold value. 2. The method of claim 1, wherein feeding back the control signal based on the output signal to adjust processing performed on the second workpiece in the nitride processing step includes adding fresh chemicals to a chemical bath used in the nitride processing step and, if the chemical bath is substantially full, draining a portion of the chemical bath.3. The method of claim 2, wherein draining the portion of the chemical bath used in the nitride processing step includes determining the portion of the chemical bath to be drained based on the output signal.4. The method of claim 2, wherein adding the fresh chemicals to the chemical bath includes determining an amount of the fresh chemicals based on the output signal.5. The method of claim 2, wherein draining the portion of the chemical bath used in the nitride processing step and adding the fresh chemicals to the chemical bath includes determining the portion of the chemical bath to be drained and determining an amount of the fresh chemicals based on the output signal.6. A method of manufacturing, the method comprising:processing a first workpiece in a nitride processing step; measuring a thickness of a field oxide feature formed on the first workpiece; detecting residual field oxide defects on the first workpiece; forming an output signal corresponding to the thickness of the field oxide feature and the residual field oxide defects; and feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a threshold value and to reduce residual field oxide defects on the second workpiece. 7. The method of claim 6, wherein feeding back the control signal based on the output signal to adjust processing performed on the second workpiece in the nitride processing step includes adding fresh chemicals to a chemical bath used in the nitride processing step and, if the chemical bath is substantially full, draining a portion of the chemical bath.8. The method of claim 7, wherein draining the portion of the chemical bath used in the nitride processing step includes determining the portion of the chemical bath to be drained based on the output signal.9. The method of claim 7, wherein adding the fresh chemicals to the chemical bath includes determining an amount of the fresh chemicals based on the output signal.10. The method of claim 7, wherein draining the portion of the chemical bath used in the nitride processing step and adding the fresh chemicals to the chemical bath includes determining the portion of the chemical bath to be drained and determining an amount of the fresh chemicals based on the output signal.11. 
A method of manufacturing, the method comprising:processing a first workpiece in a nitride processing step; measuring thicknesses of a plurality of field oxide features formed on the first workpiece; forming an output signal corresponding to the thicknesses of the plurality of field oxide features; and feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value. 12. The method of claim 11, wherein feeding back the control signal based on the output signal to adjust processing performed on the second workpiece in the nitride processing step includes adding fresh chemicals to a chemical bath used in the nitride processing step and, if the chemical bath is substantially full, draining a portion of the chemical bath.13. The method of claim 12, wherein draining the portion of the chemical bath used in the nitride processing step includes determining the portion of the chemical bath to be drained based on the output signal.14. The method of claim 12, wherein adding the fresh chemicals to the chemical bath includes determining an amount of the fresh chemicals based on the output signal.15. The method of claim 12, wherein draining the portion of the chemical bath used in the nitride processing step and adding the fresh chemicals to the chemical bath includes determining the portion of the chemical bath to be drained and determining an amount of the fresh chemicals based on the output signal.16. A method of manufacturing, the method comprising:processing a first workpiece in a nitride processing step; measuring thicknesses of a plurality of field oxide features formed on the first workpiece; detecting residual field oxide defects on the first workpiece; forming an output signal corresponding to the thicknesses of the plurality of field oxide features and the residual field oxide defects; and feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value and to reduce residual field oxide defects on the second workpiece. 17. The method of claim 16, wherein feeding back the control signal based on the output signal to adjust processing performed on the second workpiece in the nitride processing step includes adding fresh chemicals to a chemical bath used in the nitride processing step and, if the chemical bath is substantially full, draining a portion of the chemical bath.18. The method of claim 17, wherein draining the portion of the chemical bath used in the nitride processing step includes determining the portion of the chemical bath to be drained based on the output signal.19. The method of claim 17, wherein adding the fresh chemicals to the chemical bath includes determining an amount of the fresh chemicals based on the output signal.20. The method of claim 17, wherein draining the portion of the chemical bath used in the nitride processing step and adding the fresh chemicals to the chemical bath includes determining the portion of the chemical bath to be drained and determining an amount of the fresh chemicals based on the output signal.21. 
A computer-readable, program storage device, encoded with instructions that, when executed by a computer, perform a method for manufacturing a workpiece, the method comprising:processing a first workpiece in a nitride processing step; measuring a thickness of a field oxide feature formed on the first workpiece; forming an output signal corresponding to the thickness of the field oxide feature; and feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value. 22. The device of claim 21, wherein feeding back the control signal based on the output signal to adjust processing performed on the second workpiece in the nitride processing step includes adding fresh chemicals to a chemical bath used in the nitride processing step and, if the chemical bath is substantially full, draining a portion of the chemical bath.23. The device of claim 22, wherein draining the portion of the chemical bath used in the nitride processing step includes determining the portion of the chemical bath to be drained based on the output signal.24. The device of claim 22, wherein adding the fresh chemicals to the chemical bath includes determining an amount of the fresh chemicals based on the output signal.25. The device of claim 22, wherein draining the portion of the chemical bath used in the nitride processing step and adding the fresh chemicals to the chemical bath includes determining the portion of the chemical bath to be drained and determining an amount of the fresh chemicals based on the output signal.26. A computer programmed to perform a method of manufacturing, the method comprising:processing a first workpiece in a nitride processing step; measuring a thickness of a field oxide feature formed on the first workpiece; forming an output signal corresponding to the thickness of the field oxide feature; and feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value. 27. The computer of claim 26, wherein feeding back the control signal based on the output signal to adjust processing performed on the second workpiece in the nitride processing step includes adding fresh chemicals to a chemical bath used in the nitride processing step and, if the chemical bath is substantially full, draining a portion of the chemical bath.28. The computer of claim 27, wherein draining the portion of the chemical bath used in the nitride processing step includes determining the portion of the chemical bath to be drained based on the output signal.29. The computer of claim 27, wherein adding the fresh chemicals to the chemical bath includes determining an amount of the fresh chemicals based on the output signal.30. The computer of claim 27, wherein draining the portion of the chemical bath used in the nitride processing step and adding the fresh chemicals to the chemical bath includes determining the portion of the chemical bath to be drained and determining an amount of the fresh chemicals based on the output signal. |
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates generally to semiconductor fabrication technology, and, more particularly, to a method for manufacturing a workpiece.2. Description of the Related ArtThere is a constant drive within the semiconductor industry to increase the quality, reliability and throughput of integrated circuit devices, e.g., microprocessors, memory devices, and the like. This drive is fueled by consumer demands for higher quality computers and electronic devices that operate more reliably. These demands have resulted in a continual improvement in the manufacture of semiconductor devices, e.g., transistors, as well as in the manufacture of integrated circuit devices incorporating such transistors. Additionally, reducing the defects in the manufacture of the components of a typical transistor also lowers the overall cost per transistor as well as the cost of integrated circuit devices incorporating such transistors.The technologies underlying semiconductor processing tools have attracted increased attention over the last several years, resulting in substantial refinements. However, despite the advances made in this area, many of the processing tools that are currently commercially available suffer certain deficiencies. In particular, such tools often lack advanced process data monitoring capabilities, such as the ability to provide historical parametric data in a user-friendly format, as well as event logging, real-time graphical display of both current processing parameters and the processing parameters of the entire run, and remote, i.e., local site and worldwide, monitoring. These deficiencies can engender nonoptimal control of critical processing parameters, such as throughput accuracy, stability and repeatability, processing temperatures, mechanical tool parameters, and the like. This variability manifests itself as within-run disparities, run-to-run disparities and tool-to-tool disparities that can propagate into deviations in product quality and performance, whereas an ideal monitoring and diagnostics system for such tools would provide a means of monitoring this variability, as well as providing means for optimizing control of critical parameters.Among the parameters it would be useful to monitor and control are the field oxide (FOX) thickness and the residual FOX defect count following a nitride stripping and/or etching process step. As consecutive lots of workpieces (such as silicon wafers with various process layers formed thereon) are processed through a nitride stripping and/or etching process step, increasing silicon (Si) concentration in the stripping and/or etching bath causes the FOX also to etch in varying amounts. For example, when hot aqueous phosphoric acid (H3PO4) is used to selectively etch silicon nitride (Si3N4), the Si3N4 etches away fairly steadily, at roughly ten times the initial etch rate of the FOX (SiO2). However, when the H3PO4 bath is fresh and the Si concentration is relatively low, the initial etch rate of the FOX (SiO2) is much faster than the later etch rate of the FOX (SiO2), as the H3PO4 bath ages and the Si concentration increases. This causes the FOX thicknesses to increase with time, as the H3PO4 bath ages and the Si concentration increases. In particular, the FOX thicknesses typically vary from run to run and/or batch to batch, leading to varying device performance and an increased number of residual FOX defects, lowering the workpiece throughput and increasing the workpiece manufacturing costs. 
In addition, if the Si concentration oversaturates, Si may precipitate, contaminating the workpiece(s) and increasing the number of defects.The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.SUMMARY OF THE INVENTIONIn one aspect of the present invention, a method is provided for manufacturing, the method including processing a first workpiece in a nitride processing step and measuring a thickness of a field oxide feature formed on the first workpiece. The method also includes forming an output signal corresponding to the thickness of the field oxide feature. In addition, the method includes feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value.In another aspect of the present invention, a computer-readable, program storage device is provided, encoded with instructions that, when executed by a computer, perform a method for manufacturing a workpiece, the method including processing a first workpiece in a nitride processing step and measuring a thickness of a field oxide feature formed on the first workpiece. The method also includes forming an output signal corresponding to the thickness of the field oxide feature. In addition, the method includes feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value.In yet another aspect of the present invention, a computer programmed to perform a method of manufacturing is provided, the method including processing a first workpiece in a nitride processing step and measuring a thickness of a field oxide feature formed on the first workpiece. The method also includes forming an output signal corresponding to the thickness of the field oxide feature. In addition, the method includes feeding back a control signal based on the output signal to adjust processing performed on a second workpiece in the nitride processing step to adjust a thickness of a field oxide feature formed on the second workpiece toward at least a predetermined threshold value.BRIEF DESCRIPTION OF THE DRAWINGSThe invention may be understood by reference to the following description taken in conjunction with the accompanying drawings, in which the leftmost significant digit(s) in the reference numerals denote(s) the first figure in which the respective reference numerals appear, and in which:FIGS. 1-26 illustrate schematically various embodiments of a method for manufacturing according to the present invention; and, more particularly:FIGS. 1-2, 5-12, 14-15 and 17 illustrate schematically a flow chart for various embodiments of a method for manufacturing according to the present invention;FIGS. 3-4 illustrate schematically various embodiments of field oxide (FOX) features used in various embodiments of a method for manufacturing according to the present invention; andFIGS. 13, 16 and 18-21 illustrate schematically various embodiments of displays used in various embodiments of a method for manufacturing according to the present invention;FIG. 22 schematically illustrates a method for fabricating a semiconductor device practiced in accordance with the present invention;FIG. 
23 schematically illustrates workpieces being processed using a nitride strip processing tool, using a plurality of control input signals, in accordance with the present invention;FIGS. 24-25 schematically illustrate one particular embodiment of the process and tool in FIG. 23; andFIG. 26 schematically illustrates one particular embodiment of the method of FIG. 22 as may be practiced with the process and tool of FIGS. 24-25.While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific embodiments is not intended to limit the invention to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTSIllustrative embodiments of the invention are described below. In the interest of clarity, not all features of an actual implementation are described in this specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine undertaking for those of ordinary skill in the art having the benefit of this disclosure.Illustrative embodiments of a method for manufacturing according to the present invention are shown in FIGS. 1-26. As shown in FIG. 1, a workpiece 100, such as a semiconducting substrate or wafer, having one or more process layers disposed thereon, is delivered to a nitride processing step 105. The nitride processing step 105 may include nitride stripping and/or nitride etching, for example. The nitride stripping and/or nitride etching may be a wet chemical process involving hot aqueous phosphoric acid (H3PO4), for example.As shown in FIG. 2, the workpiece 100 is sent from the nitride processing step 105 and delivered to a field oxide (FOX) thickness measuring step 110. In the measuring step 110, the FOX thickness of at least one feature on the workpiece 100 is measured by a metrology or measuring tool (not shown).As shown in FIG. 3, the workpiece 100 may have a FOX feature 300 disposed thereon. The FOX feature 300 may be used for electrical isolation of semiconductor devices such as transistors (not shown) subsequently formed on the workpiece 100. Alternatively, the FOX feature 300 may be formed on the workpiece 100 specifically as a test structure used to monitor the nitride stripping and/or nitride etching of the nitride processing step 105. A silicon nitride (Si3N4) layer 310 may be formed above the workpiece 100 and adjacent the FOX feature 300. The Si3N4 layer 310 may be removed by the nitride stripping and/or nitride etching of the nitride processing step 105. The Si3N4 layer 310 may have a thickness [tau] in a range from approximately 1500-2000 Ångstroms (Å) before the nitride processing step 105, and a thickness [tau] of about 0 Å subsequent to the nitride processing step 105. 
Similarly, the FOX feature 300 may have a thickness t in a range from approximately 4000-5500 Å before the nitride processing step 105, and a thickness t in a range from approximately 4000-5000 Å subsequent to the nitride processing step 105. Typically, about 0-500 Å of the FOX feature 300 may be etched away in the nitride processing step 105. In various illustrative embodiments of the present invention, a predetermined threshold thickness value of FOX features such as the FOX feature 300 may be in a range of approximately 4000-5000 Å, subsequent to the nitride processing step 105.As shown in FIG. 4A, the workpiece 100 may have several FOX features 400A, 405A and 410A, respectively, disposed thereon. One or more of the FOX features 400A, 405A and 410A may be used for electrical isolation of semiconductor devices such as transistors (not shown) subsequently formed on the workpiece 100. Alternatively, one or more of the FOX features 400A, 405A and 410A may be formed on the workpiece 100 specifically as a test structure used to monitor the nitride stripping and/or nitride etching of the nitride processing step 105. The FOX features 400A, 405A and 410A may have respective thicknesses t1, t2 and t3 in a range from approximately 4000-5000 Å subsequent to the nitride processing step 105.As shown in FIG. 2, in the FOX thickness measuring step 110, the metrology or measuring tool (not shown) may measure the thickness t of the FOX feature 300 (see FIG. 3) disposed on the workpiece 100, producing FOX data 115 indicative of the thickness t of the measured FOX feature 300. In alternative embodiments, the metrology or measuring tool (not shown) in the FOX thickness measuring step 110 may measure the respective thicknesses t1, t2 and t3 of more than one of the FOX features 400A, 405A and 410A (see FIG. 4A) disposed on the workpiece 100, producing FOX data 115 indicative of the median and/or mean thickness of the measured FOX features 400A, 405A and 410A. In one illustrative embodiment, a scanning electron microscope (SEM) is used to perform the FOX thickness measurements of the FOX features formed to have thickness t, producing sample thickness values t1, t2, . . . , tm, where m is the total number of the FOX features (similar to the FOX features 400A, 405A and 410A in FIG. 4A) that are measured by the SEM (e.g., m=3 in FIG. 4A).As discussed above, as consecutive lots of workpieces (such as silicon wafers with various process layers formed thereon) are processed through a conventional nitride stripping and/or etching process step, increasing silicon (Si) concentration in the stripping and/or etching bath causes the FOX also to etch in varying amounts. For example, when hot aqueous phosphoric acid (H3PO4) is used to selectively etch silicon nitride (Si3N4), the Si3N4 etches away fairly steadily, at roughly ten times the initial etch rate of the FOX (SiO2). However, when the H3PO4 bath is fresh and the Si concentration is relatively low, the initial etch rate of the FOX (SiO2) is much faster than the later etch rate of the FOX (SiO2), as the H3PO4 bath ages and the Si concentration increases, in conventional nitride stripping. This causes the FOX thicknesses to increase with time, as the H3PO4 bath ages and the Si concentration increases, in conventional nitride stripping. In particular, the FOX thicknesses typically vary from run to run and/or batch to batch, in conventional nitride stripping, as shown schematically by comparing FIG. 4A with FIG. 4B.As shown in FIG. 
4B, a conventional workpiece 420 may have several FOX features 400B, 405B and 410B, respectively, disposed thereon. The FOX features 400B, 405B and 410B may have respective thicknesses T1, T2 and T3 that are each larger than the respective thicknesses t1, t2 and t3 of the FOX features 400A, 405A and 410A disposed on the workpiece 100, as shown in FIG. 4A. The conventional workpiece 420 is shown in FIG. 4B as it would appear after conventional nitride stripping in an aged H3PO4 bath, having an increased Si concentration, relative to the H3PO4 bath in which the workpiece 100, as shown in FIG. 4A, had been processed. By way of contrast, any of the various illustrative embodiments of the present invention reduce such FOX thickness variations from run to run and/or batch to batch.As shown in FIG. 5, the FOX data 115 is sent from the FOX thickness measuring step 110 to an Advanced Process Control (APC) system monitor/controller 120. In the APC system monitor/controller 120, the FOX data 115 may be used to monitor and control the processing taking place in the nitride processing step 105.As shown in FIG. 6, a feedback control signal 125 may be sent from the APC system monitor/controller 120 to the nitride processing step 105, for example, depending on the FOX data 115 sent from the FOX thickness measuring step 110. The feedback control signal 125 may be used to adjust the processing performed in the nitride processing step 105 to adjust the thickness t of a FOX feature formed on a subsequent workpiece (not shown) processed in the nitride processing step 105 toward at least a predetermined threshold value. In one illustrative embodiment, the thickness t of the FOX feature formed on the subsequent workpiece (not shown) processed in the nitride processing step 105 may be in a range of approximately 4000-5000 Å. In various illustrative embodiments of the present invention, the predetermined threshold thickness value (of FOX features such as the FOX feature 300) may be in a range of approximately 4000-5000 Å.As shown in FIG. 7, one response to the feedback control signal 125 sent from the APC system monitor/controller 120 to the nitride processing step 105 may be to drain an old portion 130 of a chemical bath (not shown) used in the nitride processing step 105 into a waste outlet 135. One of the factors contributing to the variation of the thickness t of the FOX feature 300 formed on the workpiece 100 is the concentration of silicon (Si) in the chemical bath (not shown) used in the nitride processing step 105. By draining the old portion 130 of the chemical bath (not shown) into the waste outlet 135, the concentration of Si may be reduced in a case where the chemical bath is not well stirred so that the concentration of Si may be greater toward the bottom of the chemical bath, for example. In one illustrative embodiment, the concentration of silicon (Si) after the old portion 130 has been drained into the waste outlet 135 may be in a range of approximately 10-100 parts per billion (ppb).As shown in FIG. 8, another response to the feedback control signal 125 sent from the APC system monitor/controller 120 to the nitride processing step 105 may be to add new chemicals 140 to the chemical bath (not shown) used in the nitride processing step 105 from a new chemical supply 145. By adding the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, the concentration of silicon (Si) may also be reduced, for example. 
In another illustrative embodiment, the concentration of Si after the new chemicals 140 have been added from the new chemical supply 145 may be in a range of approximately 10-100 parts per billion (ppb).As shown in FIG. 9, yet another response to the feedback control signal 125 sent from the APC system monitor/controller 120 to the nitride processing step 105 may be to add the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, also to drain the old portion 130 of the chemical bath (not shown) used in the nitride processing step 105 into the waste outlet 135. By adding the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, draining the old portion 130 of the chemical bath (not shown) into the waste outlet 135, the overall concentration of silicon (Si) may be reduced, for example. In yet another illustrative embodiment, the concentration of Si after the new chemicals 140 have been added from the new chemical supply 145 and, if the chemical bath is substantially full, after the old portion 130 has been drained into the waste outlet 135, may be in a range of approximately 10-100 parts per billion (ppb).The APC system monitor/controller 120 may be a preferred platform used in various illustrative embodiments of the present invention. In various illustrative embodiments, the APC system monitor/controller 120 may be part of a factory-wide software system. The APC system monitor/controller 120 also allows remote access and monitoring of the process performance. Furthermore, by utilizing the APC system monitor/controller 120, data storage can be more convenient, more flexible, and less expensive than local data storage on local drives, for example. The APC system monitor/controller 120 allows for more sophisticated types of control because it provides a significant amount of flexibility in writing the necessary software code.Deployment of the control strategies used in various illustrative embodiments of the present invention onto the APC system monitor/controller 120 may require a number of software components. In addition to components within the APC system monitor/controller 120, a computer script may be written for each of the semiconductor manufacturing tools involved in the control system. When a semiconductor manufacturing tool in the control system is started in the semiconductor manufacturing fab, the semiconductor manufacturing tool may call upon a script to initiate the action that is required by a nitride processing step controller (not shown). The control methods are generally defined and performed in these scripts. The development of these scripts may involve a substantial portion of the development of a control system. Various illustrative embodiments using an APC system for implementing nitride strip/etching processing are described below in conjunction with FIGS. 22-26.As shown in FIG. 10, the workpiece 100 is sent from the FOX thickness measuring step 110 to a residual FOX defect sensor 150. In the residual FOX defect sensor 150, residual FOX defects may be detected, generating a residual FOX defect count 155. The workpiece 100 may be sent from the residual FOX defect sensor 150 for further processing and/or handling.As shown in FIG. 11, the residual FOX defect count 155 is sent from the residual FOX defect sensor 150 to the APC system monitor/controller 120. 
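By way of illustration only, the chain described above, reducing the SEM sample thickness values t1, t2, . . . , tm to FOX data 115 indicative of the median and/or mean thickness and feeding back a combined replenish-and-drain response as in FIG. 9, might be sketched in Python as follows. The function names, the 4500 Å target, the proportional gain, and the "substantially full" fraction of 0.95 are assumptions introduced for the sketch; only the median/mean reduction, the approximately 4000-5000 Å window, and the drain-only-if-full rule come from the text.

```python
from statistics import mean, median

# Illustrative post-strip FOX thickness window (angstroms), taken from the
# approximately 4000-5000 A range discussed above.
FOX_MIN_A, FOX_MAX_A = 4000.0, 5000.0

def fox_data(samples_a):
    """Reduce SEM sample thickness values t1..tm to the FOX data signal.

    Mirrors the embodiments in which FOX data 115 is indicative of the
    median and/or mean thickness of the measured FOX features.
    """
    if not samples_a:
        raise ValueError("at least one FOX thickness sample is required")
    return {"median_a": median(samples_a), "mean_a": mean(samples_a)}

def bath_adjustment(thickness_a, target_a=4500.0, gain=0.0002,
                    bath_level=1.0, capacity=1.0):
    """Compute an illustrative replenish/drain response to the output signal.

    A thicker-than-target FOX indicates an aged, Si-rich bath, so a larger
    fraction of the bath is refreshed.  Fresh chemicals are always added;
    a portion is drained only if the bath is substantially full (FIG. 9).
    """
    error_a = thickness_a - target_a
    refresh = min(max(gain * error_a, 0.0), 1.0)  # clamp to [0, 1]
    add_fraction = refresh
    drain_fraction = refresh if bath_level >= 0.95 * capacity else 0.0
    return drain_fraction, add_fraction

# Example: m = 3 measured features, as in FIG. 4A.
signal = fox_data([5050.0, 5120.0, 5080.0])
in_window = FOX_MIN_A <= signal["median_a"] <= FOX_MAX_A
drain, add = bath_adjustment(signal["median_a"])
print(signal, in_window)
print(f"drain {drain:.1%} of bath, add {add:.1%} fresh chemicals")
```

In an embodiment producing only the single thickness t of the FOX feature 300, the same reduction degenerates to the m=1 case.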
In the APC system monitor/controller 120, the residual FOX defect count 155 may be used to monitor and control the processing taking place in the nitride processing step 105. In one illustrative embodiment, the APC system monitor/controller 120 may use the residual FOX defect count 155 to send a feedback control signal 125 to the nitride processing step 105. In another illustrative embodiment, the APC system monitor/controller 120 may use both the residual FOX defect count 155 (sent from the FOX defect sensor 150) and the FOX data 115 (sent from the FOX thickness measuring step 110) to send a feedback control signal 125 to the nitride processing step 105.As shown in FIG. 11, the response to the feedback control signal 125 sent from the APC system monitor/controller 120 to the nitride processing step 105 may be to add the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, also to drain the old portion 130 of the chemical bath (not shown) used in the nitride processing step 105 into the waste outlet 135. By adding the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, draining the old portion 130 of the chemical bath (not shown) into the waste outlet 135, the overall concentration of silicon (Si) may be reduced, for example. In one illustrative embodiment, the concentration of Si after the new chemicals 140 have been added from the new chemical supply 145 and, if the chemical bath is substantially full, after the old portion 130 has been drained into the waste outlet 135, may be in a range of approximately 10-100 parts per billion (ppb).As shown in FIG. 12, in addition to, and/or instead of, the feedback control signal 125, an output signal 160 may be sent from the APC system monitor/controller 120 to a FOX thickness threshold data display step 165. In the FOX thickness threshold data display step 165, the output signal 160 may be displayed, for example, by being presented in the form of a graph, as illustrated in FIG. 13, showing the FOX thickness (measured in angstroms, Å) on the workpiece 100 plotted as a function of time (measured in seconds). In one illustrative embodiment, the FOX thickness displayed is the thickness t of the FOX feature 300 formed on the workpiece 100. In another illustrative embodiment, the FOX thickness displayed is the median tmedian and/or average thickness taverage of the thickness values t1, t2, . . . , tm, where m is the total number of the FOX features (similar to the FOX features 400A, 405A and 410A in FIG. 4A) formed on the workpiece 100.As shown in FIG. 13, in one illustrative embodiment, the FOX thickness may be between the FOX underetch threshold 1300 (shown in dashed phantom) and the FOX overetch threshold 1305 (shown in dashed phantom) for a period of time. The FOX thickness may eventually cross the FOX overetch threshold 1305 (shown in dashed phantom) at the time 1310 (shown in dotted phantom).The display of the FOX thickness in the FOX thickness threshold data display step 165 may be used to alert an engineer of the need to adjust the processing performed in the nitride processing step 105 to reduce the overall concentration of silicon (Si) in the chemical bath, for example. The engineer may also adjust, for example, the FOX underetch threshold 1300 (shown in dashed phantom) and the FOX overetch threshold 1305 (shown in dashed phantom).As shown in FIG. 
14, a feedback control signal 170 may be sent from the FOX thickness threshold data display step 165 to the nitride processing step 105. As shown in FIG. 14, the response to the feedback control signal 170 sent from the FOX thickness threshold data display step 165 to the nitride processing step 105 may be to add the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, also to drain the old portion 130 of the chemical bath (not shown) used in the nitride processing step 105 into the waste outlet 135. By adding the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, draining the old portion 130 of the chemical bath (not shown) into the waste outlet 135, the overall concentration of silicon (Si) may be reduced, for example. In one illustrative embodiment, the concentration of Si after the new chemicals 140 have been added from the new chemical supply 145 and, if the chemical bath is substantially full, after the old portion 130 has been drained into the waste outlet 135, may be in a range of approximately 10-100 parts per billion (ppb).As shown in FIG. 15, in addition to, and/or instead of, the feedback control signal 170, defect counts 175 may be sent from the FOX thickness threshold data display step 165 to a defect count display step 180. In the defect count display step 180, the defect counts 175 may be displayed, for example, by being presented in the form of a histogram, as illustrated in FIG. 16, showing both the count number (defect counts 175) and the types of defects represented by the output signal 160. As shown in FIG. 16, in one illustrative embodiment, the number of residual FOX defects (shown shaded at 1600) is about 80, in the locations scanned, for the duration of the scan.The display of the number of residual FOX defects in the defect count display step 180 may be used to alert an engineer of the need to adjust the processing performed in the nitride processing step 105 to reduce the overall concentration of silicon (Si) in the chemical bath, for example. The engineer may also alter and/or select, for example, the type of residual FOX defect whose defect counts 175 are to be displayed in the defect count display step 180.As shown in FIG. 17, a feedback control signal 185 may be sent from the defect count display step 180 to the nitride processing step 105. As shown in FIG. 17, the response to the feedback control signal 185 sent from the defect count display step 180 to the nitride processing step 105 may be to add the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, also to drain the old portion 130 of the chemical bath (not shown) used in the nitride processing step 105 into the waste outlet 135. By adding the new chemicals 140 to the chemical bath (not shown) from the new chemical supply 145, and, if the chemical bath is substantially full, draining the old portion 130 of the chemical bath (not shown) into the waste outlet 135, the overall concentration of silicon (Si) may be reduced, for example. 
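The threshold test of FIG. 13 discussed above, in which the FOX thickness remains between the underetch threshold 1300 and the overetch threshold 1305 for a period of time before crossing the overetch threshold at the time 1310, might be implemented as in the following illustrative Python sketch; the threshold values and the sample trace are assumptions introduced here.

```python
def first_crossing(times_s, thicknesses_a,
                   underetch_a=4000.0, overetch_a=5000.0):
    """Find the first time the FOX thickness leaves the threshold band.

    Mirrors FIG. 13: the thickness stays between the underetch and
    overetch thresholds for a while, then crosses the overetch threshold.
    Returns (time, kind) for the first crossing, or None if none occurs.
    """
    for t_s, thickness_a in zip(times_s, thicknesses_a):
        if thickness_a > overetch_a:
            return t_s, "overetch"
        if thickness_a < underetch_a:
            return t_s, "underetch"
    return None  # stayed inside the band

# Thickness drifting upward as the bath ages and the Si concentration rises.
times = [0, 60, 120, 180, 240]
thicknesses = [4400.0, 4600.0, 4800.0, 4950.0, 5050.0]
print(first_crossing(times, thicknesses))  # -> (240, 'overetch')
```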
In one illustrative embodiment, the concentration of Si after the new chemicals 140 have been added from the new chemical supply 145 and, if the chemical bath is substantially full, after the old portion 130 has been drained into the waste outlet 135, may be in a range of approximately 10-100 parts per billion (ppb).In one illustrative embodiment, in both the FOX thickness threshold data display step 165 and the defect count display step 180, and/or by using the APC system monitor/controller 120, the engineer may be provided with advanced process data monitoring capabilities, such as the ability to provide historical parametric data in a user-friendly format, as well as event logging, real-time graphical display of both current processing parameters and the processing parameters of the entire run, and remote, i.e., local site and worldwide, monitoring. These capabilities may engender more optimal control of critical processing parameters, such as throughput accuracy, stability and repeatability, processing temperatures, mechanical tool parameters, and the like. This more optimal control of critical processing parameters reduces this variability. This reduction in variability manifests itself as fewer within-run disparities, fewer run-to-run disparities and fewer tool-to-tool disparities. This reduction in the number of these disparities that can propagate means fewer deviations in product quality and performance. In such an illustrative embodiment of a method of manufacturing according to the present invention, a monitoring and diagnostics system may be provided that monitors this variability and optimizes control of critical parameters.As shown in FIG. 18, in various illustrative embodiments, the FOX thickness may be the median tmedian of the thickness values t1, t2, . . . , tm, where m is the total number of the FOX features (similar to the FOX features 400A, 405A and 410A in FIG. 4A) formed on the workpiece 100. For example, as shown in FIG. 18, using a Tukey "box and whiskers" plot, the FOX thickness measurements performed on the FOX features 400A, 405A and 410A (see FIG. 4A) formed on the workpiece 100 may have a median value 1800 of approximately 800 Å. The median value 1800 of the FOX thickness measurements is the sample value at the midpoint of the FOX thickness measurements, so that half of the FOX thickness measurement values are less than or equal to the median value 1800 and half of the FOX thickness measurement values are greater than or equal to the median value 1800.As shown in FIG. 18, Tukey box and whiskers plots may be used to compare the FOX thickness measurement values taken using the FOX features 400A, 405A and 410A (see FIG. 4A) formed on the workpiece 100 with FOX thickness measurement values taken using FOX features (not shown) formed on a workpiece 1810 (not shown), similar to the FOX features 400A, 405A and 410A (see FIG. 4A) formed on the workpiece 100, for example. The median value 1815 is approximately 800 Å for the FOX thickness measurements of the FOX features (not shown) formed on the workpiece 1810 (not shown), similar to the FOX features 400A, 405A and 410A (see FIG. 4A) formed on the workpiece 100.Alternatively, the FOX thickness may be the average thickness taverage of the thickness values t1, t2, . . . , tm, where m is the total number of the FOX features (similar to the FOX features 400A, 405A and 410A in FIG. 4A) formed on the workpiece 100. As shown in FIG. 
19, using a Student's t-distribution plot 1900, the FOX thickness measurement values taken using the FOX features 400A, 405A and 410A (see FIG. 4A) formed on the workpiece 100 may have a sample mean value 1905 of approximately 800 Å. The sample mean value 1905 of the FOX thickness measurements taken using FOX features 400A, 405A and 410A (see FIG. 4A) is the sample average of the FOX thickness measurements over all m of the features 400A, 405A and 410A that are measured, where xi is the FOX thickness measurement of the ith of the features 400A, 405A and 410A. Note that the number m of the features 400A, 405A and 410A that are measured may be less than or equal to the total number M of the features 400A, 405A and 410A on the workpiece 100.As shown in FIG. 19, Student's t-distribution plots 1900 and 1910 may be used to compare the sample mean value 1905 of the FOX thickness measurements (performed on the workpiece 100) with the sample mean value 1905 of the FOX thickness measurements performed on the workpiece 1810 (not shown), for example. The sample mean value 1905 of approximately 800 Å of the FOX thickness measurements performed on the workpiece 1810 (not shown) is the sample average of the FOX thickness measurements over all n of the features (not shown) that are measured on the workpiece 1810 (not shown), where yj is the FOX thickness measurement of the jth of the features (not shown) that are measured on the workpiece 1810 (not shown). Note that the number n of the features (not shown) that are measured on the workpiece 1810 (not shown) may be less than or equal to the total number T of the features (not shown) on the workpiece 1810 (not shown).As shown in FIG. 19, the Student's t-distribution plots 1900 and 1910 may approach the Gaussian normal z-distribution plot 1915 as the number of features n and m becomes very large, for m>n>>about 25. The Gaussian normal z-distribution plot 1915 has the mean value 1905 ([mu]) given by the expressions in the limit m>n>>about 25, where xi (the FOX thickness measurement of the ith of the features on the workpiece 100) and yj (the FOX thickness measurement of the jth of the features on the workpiece 1810) are treated as independent random variables with means <xi> = [mu] = <yj> for 1≤i≤m and 1≤j≤n, and where the mean value 1905 ([mu]) may also be approximately 800 Å.As shown in FIG. 20, using a Tukey "box and whiskers" plot, the FOX thickness measurements performed on the workpiece 100 may have the median value 1800 (see FIG. 18) contained within an interquartile range (IQR) box 2005 bounded by first and third quartile values 2010 and 2015, respectively. Whiskers 2020 and 2025 may not extend beyond one and a half times the difference between the third and first quartiles 2015 and 2010 (1.5*IQR).The first quartile value 2010 is the median value of the FOX thickness measurements that are less than or equal to the median value 1800. The third quartile value 2015 is the median value of the FOX thickness measurements that are greater than or equal to the median value 1800. The IQR is the difference between the third and first quartiles 2015 and 2010. Any FOX thickness measurement values beyond the whiskers 2020 and 2025 are "outliers" and may not always be depicted in a Tukey box and whiskers plot.As shown in FIG. 20, Tukey box and whiskers plots may be used to compare the FOX thickness measurement values taken on the workpiece 100 with the FOX thickness measurement values taken on the workpiece 1810 (not shown), for example. 
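The Tukey box-and-whiskers quantities just defined lend themselves to direct computation. The following illustrative Python sketch follows the text's conventions, in which the first quartile is the median of the values less than or equal to the median and the whiskers extend no more than 1.5*IQR beyond the quartiles; other quartile conventions exist, and nothing below is taken from the disclosure beyond those definitions.

```python
from statistics import median

def tukey_summary(values):
    """Compute the Tukey box-and-whiskers quantities described above."""
    data = sorted(values)
    med = median(data)
    # Quartiles per the text: medians of the lower and upper halves,
    # each half including values equal to the overall median.
    q1 = median([v for v in data if v <= med])
    q3 = median([v for v in data if v >= med])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    # Whiskers stop at the most extreme values still within 1.5*IQR.
    whisker_lo = min(v for v in data if v >= lo)
    whisker_hi = max(v for v in data if v <= hi)
    outliers = [v for v in data if v < lo or v > hi]
    return {"median": med, "q1": q1, "q3": q3, "iqr": iqr,
            "whiskers": (whisker_lo, whisker_hi), "outliers": outliers}

# Illustrative FOX thickness samples (angstroms); 1200 is an outlier.
print(tukey_summary([780.0, 795.0, 800.0, 810.0, 820.0, 1200.0]))
```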
The FOX thickness measurements performed on the workpiece 1810 may have the median value 1815 (see FIG. 18) contained within an IQR box 2035 bounded by first and third quartile values 2040 and 2045, respectively. Whiskers 2050 and 2055 may not extend beyond one and a half times the difference between the third and first quartiles 2045 and 2040 (1.5*IQR).The first quartile value 2040 is the median value of the FOX thickness measurements that are less than or equal to the median value 1815. The third quartile value 2045 is the median value of the FOX thickness measurements that are greater than or equal to the median value 1815. The IQR is the difference between the third and first quartile values 2045 and 2040. Any FOX thickness measurement values beyond the whiskers 2050 and 2055 are "outliers" and may not always be depicted in a Tukey box and whiskers plot.Alternatively, as shown in FIG. 21, using the Student's t-distribution plot 1900, the FOX thickness measurements performed on the workpiece 100 may have the sample mean value 1905 and a sample standard error 2100, bounded by the sample mean value 1905 and a first standard error line 2105. The sample standard error 2100 is based on the sample standard deviation of the FOX thickness measurements taken over all m of the features that are measured on the workpiece 100, where xi is the FOX thickness measurement of the ith FOX feature. Note that the number m of the FOX features that are measured may be less than or equal to the total number M of the FOX features on the workpiece 100. The sample standard error 2100 for the FOX thickness measurements decreases as the number m (the number of the FOX features on the workpiece 100 that are measured) increases.As shown in FIG. 21, Student's t-distribution plots 1900 and 1910 may be used to compare the FOX thickness measurement values taken on the workpiece 100 with the FOX thickness measurement values taken on the workpiece 1810 (not shown), for example. The FOX thickness measurements performed on the workpiece 1810 may have the sample mean value 1905 and a sample standard error 2110, bounded by the sample mean value 1905 and a first standard error line 2115. The sample standard error 2110 is for the FOX thickness measurements taken over all n of the FOX features that are measured on the workpiece 1810, where yj is the FOX thickness measurement of the jth FOX feature. Note that the number n of the FOX features that are measured on the workpiece 1810 may be less than or equal to the total number T of the FOX features on the workpiece 1810. The sample standard error 2110 for the FOX thickness measurements decreases as the number n (the number of the FOX features on the workpiece 1810 that are measured) increases.As shown in FIG. 21, the Student's t-distribution plots 1900 and 1910 may approach the Gaussian normal z-distribution plot 1915 as the number of features n and m becomes very large, for m>n>>about 25. The Gaussian normal z-distribution plot 1915 has a standard deviation 2120 ([sigma]/m), bounded by the mean value 1905 ([mu]) and a first standard deviation line 2125. 
The Gaussian normal standard deviation 2120 ([sigma]/m) is given by an expression that is substantially equivalent to the normal standard deviation [sigma]/(n) in the limit m>n>>about 25, where xi (the FOX thickness measurement of the ith FOX feature on the workpiece 100) and yj (the FOX thickness measurement of the jth FOX feature on the workpiece 1810) are treated as independent random variables with means <xi> = [mu] = <yj> and with variances <(xi-[mu])2> = [sigma]2 = <(yj-[mu])2> for 1≤i≤m and 1≤j≤n, and where the standard deviation 2120 ([sigma]/m) may be approximately 150 Å and may also be substantially equivalent to the normal standard deviation [sigma]/(n). Note that the independence of xi and yj means that the cross-correlations <(xi-[mu])(yj-[mu])> vanish.FIG. 22 illustrates one particular embodiment of a method 2200 practiced in accordance with the present invention. FIG. 23 illustrates one particular apparatus 2300 with which the method 2200 may be practiced. For the sake of clarity, and to further an understanding of the invention, the method 2200 shall be disclosed in the context of the apparatus 2300. However, the invention is not so limited and admits wide variation, as is discussed further below.Referring now to both FIGS. 22 and 23, a batch or lot of workpieces or wafers 2305 is being processed through a nitride strip processing tool 2310. The nitride strip processing tool 2310 may be any nitride strip processing tool known to the art, provided it includes the requisite control capabilities. The nitride strip processing tool 2310 includes a nitride strip processing tool controller 2315 for this purpose. The nature and function of the nitride strip processing tool controller 2315 will be implementation specific. For instance, the nitride strip processing tool controller 2315 may control nitride strip control input parameters such as nitride stripping bath control input parameters. The nitride stripping bath control input parameters may include nitride strip control input parameters for adding hot aqueous phosphoric acid (H3PO4) to the bath used to selectively etch the silicon nitride (Si3N4), draining the bath, stirring the bath, and the like. Four workpieces 2305 are shown in FIG. 23, but the lot of workpieces or wafers, i.e., the "wafer lot," may be any practicable number of wafers from one to any finite number.The method 2200 begins, as set forth in box 2220, by measuring a parameter characteristic of the nitride strip processing performed on the workpiece 2305 in the nitride strip processing tool 2310. The nature, identity, and measurement of characteristic parameters will be largely implementation specific and even tool specific. For instance, capabilities for monitoring process parameters vary, to some degree, from tool to tool. Greater sensing capabilities may permit wider latitude in the characteristic parameters that are identified and measured and the manner in which this is done. Conversely, lesser sensing capabilities may restrict this latitude. For example, a metrology tool (not shown) may measure the FOX thickness of a workpiece 2305, and/or an average of the FOX thicknesses of the workpieces 2305 in a lot, and the metrology tool may need to be calibrated, but this calibration may vary from wafer to wafer. The metrology tool by itself typically does not feed back the FOX thickness information to the nitride strip tool. 
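The sample-statistics machinery of FIGS. 18-21 described above can be condensed into a short sketch before the discussion of the method 2200 continues. The s/sqrt(m) estimate of the standard error used below is the conventional one and is assumed here, since the corresponding expressions are not reproduced in the text; the sample values and the z-style comparison statistic are likewise illustrative.

```python
from math import sqrt
from statistics import mean, stdev

def mean_and_standard_error(samples):
    """Sample mean and its standard error for FOX thickness samples.

    The standard error shrinks as the number of measured features grows,
    as noted above; s/sqrt(m) is the usual estimate and is assumed here.
    """
    m = len(samples)
    return mean(samples), stdev(samples) / sqrt(m)

# Compare workpiece 100 (x samples) against workpiece 1810 (y samples),
# all values in angstroms and purely illustrative.
x = [790.0, 805.0, 798.0, 812.0, 801.0]
y = [795.0, 808.0, 799.0, 803.0]
mx, sex = mean_and_standard_error(x)
my, sey = mean_and_standard_error(y)
# Two-sample z-style statistic; for small m and n a Student's t reference
# distribution is the appropriate comparison, as the text indicates.
z = (mx - my) / sqrt(sex**2 + sey**2)
print(f"x: {mx:.1f} +/- {sex:.1f} A, y: {my:.1f} +/- {sey:.1f} A, z = {z:.2f}")
```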
The FOX thickness of a workpiece 2305, and/or an average of the FOX thicknesses of the workpieces 2305 in a lot, is an illustrative example of a parameter characteristic of the nitride strip processing performed on the workpiece in the nitride strip processing tool 2310. Another illustrative example of a parameter characteristic of the nitride strip processing performed on the workpiece in the nitride strip processing tool 2310 is the residual FOX defect count 155 detected by a residual FOX defect sensor 150, as discussed above in the description of FIGS. 10-11.Turning to FIG. 23, in this particular embodiment, the nitride strip process characteristic parameters are measured and/or monitored by tool sensors (not shown). The outputs of these tool sensors are transmitted to a computer system 2330 over a line 2320. The computer system 2330 analyzes these sensor outputs to identify the characteristic parameters.Returning to FIG. 22, once the characteristic parameter is identified and measured, the method 2200 proceeds by modeling the measured and identified characteristic parameter, as set forth in box 2230. The computer system 2330 in FIG. 23 is, in this particular embodiment, programmed to model the characteristic parameter. The manner in which this modeling occurs will be implementation specific.In the embodiment of FIG. 23, a database 2335 stores a plurality of models that might potentially be applied, depending upon which characteristic parameter is identified. This particular embodiment, therefore, requires some a priori knowledge of the characteristic parameters that might be measured. The computer system 2330 then extracts an appropriate model from the database 2335 of potential models to apply to the identified characteristic parameters. If the database 2335 does not include an appropriate model, then the characteristic parameter may be ignored, or the computer system 2330 may attempt to develop one, if so programmed. The database 2335 may be stored on any kind of computer-readable, program storage medium, such as an optical disk 2340, a floppy disk 2345, or a hard disk drive (not shown) of the computer system 2330. The database 2335 may also be stored on a separate computer system (not shown) that interfaces with the computer system 2330.Modeling of the identified characteristic parameter may be implemented differently in alternative embodiments. For instance, the computer system 2330 may be programmed using some form of artificial intelligence to analyze the sensor outputs and controller inputs to develop a model on-the-fly in a real-time implementation. This approach might be a useful adjunct to the embodiment illustrated in FIG. 23, and discussed above, where characteristic parameters are measured and identified for which the database 2335 has no appropriate model.The method 2200 of FIG. 22 then proceeds by applying the model to modify a nitride strip control input parameter, as set forth in box 2240. Depending on the implementation, applying the model may yield either a new value for the nitride strip control input parameter or a correction to the existing nitride strip control input parameter. The new nitride strip control input is then formulated from the value yielded by the model and is transmitted to the nitride strip processing tool controller 2315 over the line 2320. 
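A minimal sketch of the model-extraction and control-input update of boxes 2230-2240 follows. The registry contents, the parameter names, and the linear forms are hypothetical placeholders standing in for whatever models the database 2335 actually holds; only the extract-apply-fallback structure comes from the text.

```python
# Hypothetical stand-in for the database 2335: each characteristic
# parameter maps to a model that converts its measured value into a
# correction to a nitride strip control input.
MODEL_DATABASE = {
    "fox_thickness_a": lambda t: 0.0002 * (t - 4500.0),   # refresh fraction
    "residual_fox_defect_count": lambda c: min(c / 500.0, 1.0),
}

def apply_model(parameter, value, current_input=0.0):
    """Return an updated nitride strip control input, or None if no model.

    Mirrors box 2240: the model may yield a new value or a correction to
    the existing control input; without a model, the parameter is ignored
    (or a model could be developed, if so programmed).
    """
    model = MODEL_DATABASE.get(parameter)
    if model is None:
        return None
    correction = model(value)
    return current_input + correction

print(apply_model("fox_thickness_a", 5100.0))
print(apply_model("unknown_parameter", 1.0))  # -> None, parameter ignored
```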
The nitride strip processing tool controller 2315 then controls subsequent nitride strip process operations in accordance with the new nitride strip control inputs.Some alternative embodiments may employ a form of feedback to improve the modeling of characteristic parameters. The implementation of this feedback is dependent on several disparate factors, including the tool's sensing capabilities and economics. One technique for doing this would be to monitor at least one effect of the model's implementation and update the model based on the effect(s) monitored. The update may also depend on the model. For instance, a linear model may require a different update than would a non-linear model, all other factors being the same.As is evident from the discussion above, some features of the present invention are implemented in software. For instance, the acts set forth in the boxes 2220-2240 in FIG. 22 are, in the illustrated embodiment, software-implemented, in whole or in part. Thus, some features of the present invention are implemented as instructions encoded on a computer-readable, program storage medium. The program storage medium may be of any type suitable to the particular implementation. However, the program storage medium will typically be magnetic, such as the floppy disk 2345 or the computer 2330 hard disk drive (not shown), or optical, such as the optical disk 2340. When these instructions are executed by a computer, they perform the disclosed functions. The computer may be a desktop computer, such as the computer 2330. However, the computer might alternatively be a processor embedded in the nitride strip processing tool 2310. The computer might also be a laptop, a workstation, or a mainframe in various other embodiments. The scope of the invention is not limited by the type or nature of the program storage medium or computer with which embodiments of the invention might be implemented.Thus, some portions of the detailed descriptions herein are, or may be, presented in terms of algorithms, functions, techniques, and/or processes. These terms enable those skilled in the art most effectively to convey the substance of their work to others skilled in the art. These terms are here, and are generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electromagnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated.It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, and the like. All of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities and actions. 
Unless specifically stated otherwise, or as may be apparent from the discussion, terms such as "processing," "computing," "calculating," "determining," "displaying," and the like, used herein refer to the action(s) and processes of a computer system, or similar electronic and/or mechanical computing device, that manipulates and transforms data, represented as physical (electromagnetic) quantities within the computer system's registers and/or memories, into other data similarly represented as physical quantities within the computer system's memories and/or registers and/or other such information storage, transmission and/or display devices.Construction of an Illustrative Apparatus. An exemplary embodiment 2400 of the apparatus 2300 in FIG. 23 is illustrated in FIGS. 24-25, in which the apparatus 2400 comprises a portion of an Advanced Process Control ("APC") system. FIGS. 24-25 are conceptualized, structural and functional block diagrams, respectively, of the apparatus 2400. A set of processing steps is performed on a lot of wafers 2405 on a nitride strip processing tool 2410. Because the apparatus 2400 is part of an APC system, the wafers 2405 are processed on a run-to-run basis. Thus, process adjustments are made and held constant for the duration of a run, based on run-level measurements or averages. A "run" may be a lot, a batch of lots, or even an individual wafer.In this particular embodiment, the wafers 2405 are processed by the nitride strip processing tool 2410 and various operations in the process are controlled by a plurality of nitride strip control input signals on a line 2420 between the nitride strip processing tool 2410 and a workstation 2430. Exemplary nitride strip control inputs for this embodiment might include nitride stripping bath control input parameters for adding hot aqueous phosphoric acid (H3PO4) to the bath used to selectively etch the silicon nitride (Si3N4), draining the bath, stirring the bath, and the like.When a process step in the nitride strip processing tool 2410 is concluded, the semiconductor wafers 2405 being processed in the nitride strip processing tool 2410 are examined in a review station 2417. The nitride strip control inputs generally affect the FOX thickness of the semiconductor wafers 2405 and, hence, the variability and properties of the dielectric film etched/deposited by the nitride strip processing tool 2410 on the wafers 2405. Once errors are determined from the examination after the run of a lot of wafers 2405, the nitride strip control inputs on the line 2420 are modified for a subsequent run of a lot of wafers 2405. Modifying the control signals on the line 2420 is designed to improve the next process step in the nitride strip processing tool 2410. The modification is performed in accordance with one particular embodiment of the method 2200 set forth in FIG. 22, as described more fully below. Once the relevant nitride strip control input signals for the nitride strip processing tool 2410 are updated, the nitride strip control input signals with new settings are used for a subsequent run of semiconductor devices.Referring now to both FIGS. 24 and 25, the nitride strip processing tool 2410 communicates with a manufacturing framework comprising a network of processing modules. One such module is an APC system manager 2540 resident on the computer 2440. This network of processing modules constitutes the APC system. The nitride strip processing tool 2410 generally includes an equipment interface 2510 and a sensor interface 2515. 
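The run-to-run discipline described above, in which adjustments are computed from run-level measurements and then held constant for the duration of the next run, is sketched below. The exponentially weighted moving average (EWMA) smoothing is a common APC choice assumed here for illustration; the setpoint and gain are likewise assumptions not taken from the text.

```python
class RunToRunController:
    """Illustrative run-to-run controller for one nitride strip control input.

    The control input is recomputed after each run from the run-level
    error and then held constant for the next run, as described above.
    """

    def __init__(self, setpoint_a=4500.0, gain=0.0002, ewma=0.3):
        self.setpoint_a = setpoint_a
        self.gain = gain
        self.ewma = ewma          # smoothing weight, an assumption here
        self.smoothed_error_a = 0.0
        self.control_input = 0.0  # e.g., bath refresh fraction per run

    def end_of_run(self, measured_a):
        """Fold this run's measurement into the next run's control input."""
        error_a = measured_a - self.setpoint_a
        self.smoothed_error_a = (self.ewma * error_a
                                 + (1.0 - self.ewma) * self.smoothed_error_a)
        self.control_input = max(0.0, self.gain * self.smoothed_error_a)
        return self.control_input

ctrl = RunToRunController()
for run_measurement in (4700.0, 4850.0, 5000.0):
    print(f"next-run refresh fraction: {ctrl.end_of_run(run_measurement):.3f}")
```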
A machine interface 2530 resides on the workstation 2430. The machine interface 2530 bridges the gap between the APC framework, e.g., the APC system manager 2540, and the equipment interface 2510. Thus, the machine interface 2530 interfaces the nitride strip processing tool 2410 with the APC framework and supports machine setup, activation, monitoring, and data collection. The sensor interface 2515 provides the appropriate interface environment to communicate with external sensors, such as LabView(R) or other sensor bus-based data acquisition software. Both the machine interface 2530 and the sensor interface 2515 use a set of functionalities (such as a communication standard) to collect data to be used. The equipment interface 2510 and the sensor interface 2515 communicate over the line 2420 with the machine interface 2530 resident on the workstation 2430.

More particularly, the machine interface 2530 receives commands, status events, and collected data from the equipment interface 2510 and forwards these as needed to other APC components and event channels. In turn, responses from APC components are received by the machine interface 2530 and rerouted to the equipment interface 2510. The machine interface 2530 also reformats and restructures messages and data as necessary. The machine interface 2530 supports the startup/shutdown procedures within the APC System Manager 2540. It also serves as an APC data collector, buffering data collected by the equipment interface 2510 and emitting appropriate data collection signals.

In the particular embodiment illustrated, the APC system is a factory-wide software system, but this is not necessary to the practice of the invention. The control strategies taught by the present invention can be applied to virtually any semiconductor nitride strip processing tool on a factory floor. Indeed, the present invention may be simultaneously employed on multiple nitride strip processing tools in the same factory or in the same fabrication process. The APC framework permits remote access and monitoring of the process performance. Furthermore, by utilizing the APC framework, data storage can be more convenient, more flexible, and less expensive than data storage on local drives. However, the present invention may be employed, in some alternative embodiments, on local drives.

The illustrated embodiment deploys the present invention onto the APC framework utilizing a number of software components. In addition to components within the APC framework, a computer script is written for each of the semiconductor nitride strip processing tools involved in the control system. When a semiconductor nitride strip processing tool in the control system is started in the semiconductor manufacturing fab, the semiconductor nitride strip processing tool generally calls upon a script to initiate the action that is required by the nitride strip processing tool controller. The control methods are generally defined and performed using these scripts. The development of these scripts can comprise a significant portion of the development of a control system.

In this particular embodiment, there are several separate software scripts that perform the tasks involved in controlling the nitride strip processing operation. There is one script for the nitride strip processing tool 2410, including the review station 2417 and the nitride strip processing tool controller 2415. 
There is also a script to handle the actual data capture from the review station 2417 and another script that contains common procedures that can be referenced by any of the other scripts. There is also a script for the APC system manager 2540. The precise number of scripts, however, is implementation specific, and alternative embodiments may use other numbers of scripts.

Operation of an Illustrative Apparatus. FIG. 26 illustrates one particular embodiment 2600 of the method 2200 in FIG. 22. The method 2600 may be practiced with the apparatus 2400 illustrated in FIGS. 24-25, but the invention is not so limited. The method 2600 may be practiced with any apparatus that may perform the functions set forth in FIG. 26. Furthermore, the method 2200 in FIG. 22 may be practiced in embodiments alternative to the method 2600 in FIG. 26.

Referring now to all of FIGS. 24-26, the method 2600 begins with processing a lot of wafers 2405 through a nitride strip processing tool 2410, as set forth in box 2610. In this particular embodiment, the nitride strip processing tool 2410 has been initialized for processing by the APC system manager 2540 through the machine interface 2530 and the equipment interface 2510. In this particular embodiment, before the nitride strip processing tool 2410 is run, the APC system manager script is called to initialize the nitride strip processing tool 2410. At this step, the script records the identification number of the nitride strip processing tool 2410 and the lot number of the wafers 2405. The identification number is then stored against the lot number in a data store 2460. The rest of the script, such as the APCData call and the Setup and StartMachine calls, is formulated with blank or dummy data in order to force the machine to use default settings.

As part of this initialization, the initial setpoints for nitride strip control are provided to the nitride strip processing tool controller 2415 over the line 2420. These initial setpoints may be determined and implemented in any suitable manner known to the art. In the particular embodiment illustrated, nitride strip controls are implemented by control threads. Each control thread acts like a separate controller and is differentiated by various process conditions. For nitride strip control, the control threads are separated by a combination of different conditions. These conditions may include, for example, the semiconductor nitride strip processing tool 2410 currently processing the wafer lot, the semiconductor product, the semiconductor manufacturing operation, and one or more of the semiconductor processing tools (not shown) that previously processed the semiconductor wafer lot.

Control threads are separated because different process conditions affect the nitride strip error differently. By isolating each of the process conditions into its own corresponding control thread, the nitride strip error becomes a more accurate portrayal of the conditions in which a subsequent semiconductor wafer lot in the control thread will be processed. Since the error measurement is more relevant, changes to the nitride strip control input signals based upon the error will be more appropriate.

The control thread for the nitride strip control scheme depends upon the current nitride strip processing tool, the current operation, the product code for the current lot, and the identification number at a previous processing step. 
The first three parameters are generally found in the context information that is passed to the script from the nitride strip processing tool 2410. The fourth parameter is generally stored when the lot is previously processed. Once all four parameters are defined, they are combined to form the control thread name; NITR02_OPER01_PROD01_NITR01 is an example of a control thread name. The control thread name is also stored in correspondence to the wafer lot number in the data store 2460.

Once the lot is associated with a control thread name, the initial settings for that control thread are generally retrieved from the data store 2460. There are at least two possibilities when the call is made for the information. One possibility is that there are no settings stored under the current control thread name. This can happen when the control thread is new, or if the information was lost or deleted. In these cases, the script initializes the control thread assuming that there is no error associated with it and uses the target values of the nitride strip errors as the nitride strip control input settings. It is preferred that the controllers use the default machine settings as the initial settings. By assuming some settings, the nitride strip errors can be related back to the control settings in order to facilitate feedback control.

Another possibility is that the initial settings are stored under the control thread name. In this case, one or more wafer lots have been processed under the same control thread name as the current wafer lot, and have also been measured for nitride strip error using the review station 2417. When this information exists, the nitride strip control input signal settings are retrieved from the data store 2460. These settings are then downloaded to the nitride strip processing tool 2410.

The wafers 2405 are processed through the nitride strip processing tool 2410. This includes, in the embodiment illustrated, dielectric film or layer etch and/or deposition. The wafers 2405 are measured on the review station 2417 after their nitride strip processing on the nitride strip processing tool 2410. The review station 2417 examines the wafers 2405 after they are processed for a number of errors. The data generated by the instruments of the review station 2417 is passed to the machine interface 2530 via the sensor interface 2515 and the line 2420. The review station script begins with a number of APC commands for the collection of data. The review station script then locks itself in place and activates a data-available script. This script facilitates the actual transfer of the data from the review station 2417 to the APC framework. Once the transfer is completed, the script exits and unlocks the review station script. The interaction with the review station 2417 is then generally complete.

As will be appreciated by those skilled in the art having the benefit of this disclosure, the data generated by the review station 2417 should be preprocessed for use. Review stations, such as KLA review stations, provide the control algorithms for measuring the control error. Each of the error measurements, in this particular embodiment, corresponds to one of the nitride strip control input signals on the line 2420 in a direct manner. Before the error can be utilized to correct the nitride strip control input signal, a certain amount of preprocessing is generally completed.

For example, preprocessing may include outlier rejection. 
Outlier rejection is a gross error check ensuring that the received data is reasonable in light of the historical performance of the process. This procedure involves comparing each of the nitride strip errors to its corresponding predetermined boundary parameter. In one embodiment, even if only one of the predetermined boundaries is exceeded, the error data from the entire semiconductor wafer lot is generally rejected.

To determine the limits of the outlier rejection, thousands of actual semiconductor manufacturing fabrication ("fab") data points are collected. The standard deviation for each error parameter in this collection of data is then calculated. In one embodiment, for outlier rejection, nine times the standard deviation (both positive and negative) is generally chosen as the predetermined boundary. This is done primarily to ensure that only points that are significantly outside the normal operating conditions of the process are rejected.

Preprocessing may also smooth the data, which is also known as filtering. Filtering is important because the error measurements are subject to a certain amount of randomness, such that the error can deviate significantly in value. Filtering the review station data results in a more accurate assessment of the error in the nitride strip control input signal settings. In one embodiment, the nitride strip control scheme utilizes a filtering procedure known as an Exponentially-Weighted Moving Average ("EWMA") filter, although other filtering procedures can be utilized in this context.

One embodiment of the EWMA filter is represented by Equation (1):

AVG_N = W * M_C + (1 - W) * AVG_P    (1)

where:
AVG_N ≡ the new EWMA average;
W ≡ a weight for the new average (AVG_N);
M_C ≡ the current measurement; and
AVG_P ≡ the previous EWMA average.

The weight is an adjustable parameter that can be used to control the amount of filtering and is generally between zero and one. The weight represents the confidence in the accuracy of the current data point. If the measurement is considered accurate, the weight should be close to one. If there were a significant amount of fluctuation in the process, then a number closer to zero would be appropriate.

In one embodiment, there are at least two techniques for utilizing the EWMA filtering process. The first technique uses the previous average, the weight, and the current measurement, as described above. Among the advantages of the first implementation are ease of use and minimal data storage. One disadvantage of the first implementation is that it generally does not retain much process information. Furthermore, the previous average calculated in this manner would be made up of every data point that preceded it, which may be undesirable. The second technique retains only some of the data and calculates the average from the raw data each time.

The manufacturing environment in the semiconductor manufacturing fab presents some unique challenges. The order in which the semiconductor wafer lots are processed through a nitride strip processing tool may not correspond to the order in which they are read on the review station. This could lead to data points being added to the EWMA average out of sequence. Semiconductor wafer lots may also be analyzed more than once to verify the error measurements. With no data retention, both readings would contribute to the EWMA average, which may be an undesirable characteristic. Furthermore, some of the control threads may have low volume, which may cause the previous average to be outdated such that it may not be able to accurately represent the error in the nitride strip control input signal settings. 
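For concreteness, the following is a minimal sketch of Equation (1) and of the nine-sigma gross error check described above. It is illustrative only; the function names and parameter defaults are assumptions, not code from this disclosure.

```python
def ewma(current: float, previous_avg: float, weight: float) -> float:
    """Equation (1): AVG_N = W * M_C + (1 - W) * AVG_P, with 0 <= W <= 1."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be between zero and one")
    return weight * current + (1.0 - weight) * previous_avg

def passes_outlier_check(error: float, mean: float, sigma: float,
                         n_sigma: float = 9.0) -> bool:
    """Gross error check: reject points beyond +/- n_sigma standard
    deviations of the historical fab data (nine sigma in one embodiment)."""
    return abs(error - mean) <= n_sigma * sigma
```

A weight near one trusts the current measurement, while a weight near zero trusts the history, matching the confidence interpretation of the weight given above.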
The nitride strip processing tool controller 2415, in this particular embodiment, uses limited storage of data to calculate the EWMA filtered error, i.e., the first technique. Wafer lot data, including the lot number, the time the lot was processed, and the multiple error estimates, are stored in the data store 2460 under the control thread name. When a new set of data is collected, the stack of data is retrieved from the data store 2460 and analyzed. The lot number of the current lot being processed is compared to those in the stack. If the lot number matches any of the data present there, the error measurements are replaced. Otherwise, the data point is added to the current stack in chronological order, according to the time periods when the lots were processed. In one embodiment, any data point within the stack that is over 258 hours old is removed. Once the aforementioned steps are complete, the new filter average is calculated and stored to the data store 2460.

Thus, the data is collected and preprocessed, and then processed to generate an estimate of the current errors in the nitride strip control input signal settings. First, the data is passed to a compiled Matlab(R) plug-in that performs the outlier rejection criteria described above. The inputs to the plug-in interface are the multiple error measurements and an array containing boundary values. The return from the plug-in interface is a single toggle variable. A nonzero return denotes that the data has failed the rejection criteria; otherwise, the variable returns the default value of zero and the script continues to process.

After the outlier rejection is completed, the data is passed to the EWMA filtering procedure. The controller data for the control thread name associated with the lot is retrieved, and all of the relevant operations upon the stack of lot data are carried out. This includes replacing redundant data or removing older data. Once the data stack is adequately prepared, it is parsed into ascending time-ordered arrays that correspond to the error values. These arrays are fed into the EWMA plug-in along with an array of the parameters required for its execution. In one embodiment, the return from the plug-in comprises the six filtered error values. 
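The stack maintenance described above can be summarized in a short sketch. The record layout and helper names below are hypothetical; only the replace-or-insert rule, the chronological ordering, and the 258-hour retention window come from the text.

```python
from dataclasses import dataclass

MAX_AGE_HOURS = 258.0  # data points older than this are removed

@dataclass
class LotRecord:
    lot: str       # wafer lot number
    hours: float   # time (in hours) when the lot was processed
    errors: list   # the multiple error estimates for the lot

def update_stack(stack, new, now_hours):
    """Replace a re-measured lot's errors, or add the new point in
    chronological order, then prune outdated points."""
    for rec in stack:
        if rec.lot == new.lot:
            rec.errors = new.errors  # re-read lot: replace, don't duplicate
            break
    else:
        stack.append(new)
    stack = [r for r in stack if now_hours - r.hours <= MAX_AGE_HOURS]
    stack.sort(key=lambda r: r.hours)  # ascending time order for the filter
    return stack
```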
Returning to FIG. 26, data preprocessing includes measuring a characteristic parameter in a nitride strip operation, such as the workpiece 2405 FOX thickness, arising from nitride strip processing control of the nitride strip processing tool 2410, as set forth in box 2620. Known, potential characteristic parameters may be identified by characteristic data patterns or may be identified as known consequences of modifications to control input parameters. The example given above of how changes in silicon (Si) concentration in the nitride stripping bath affect FOX thickness variability falls into this latter category.

The next step in the control process is to calculate the new settings for the nitride strip processing tool controller 2415 of the nitride strip processing tool 2410. The previous settings for the control thread corresponding to the current wafer lot are retrieved from the data store 2460. This data is paired with the current set of nitride strip errors. The new settings are calculated by calling a compiled Matlab(R) plug-in. This application incorporates a number of inputs, performs calculations in a separate execution component, and returns a number of outputs to the main script. Generally, the inputs of the Matlab(R) plug-in are the nitride strip control input signal settings, the review station errors, an array of parameters that are necessary for the control algorithm, and a currently unused flag error. The outputs of the Matlab(R) plug-in are the new controller settings, calculated in the plug-in according to the controller algorithm described above.

A nitride strip process engineer or a control engineer, who generally determines the actual form and extent of the control action, can set the parameters. They include the threshold values, maximum step sizes, controller weights, and target values. Once the new parameter settings are calculated, the script stores the settings in the data store 2460 such that the nitride strip processing tool 2410 can retrieve them for the next wafer lot to be processed. The principles taught by the present invention can be implemented in other types of manufacturing frameworks.

Returning again to FIG. 26, the calculation of new settings includes, as set forth in box 2630, modeling the identified characteristic parameter. This modeling may be performed by the Matlab(R) plug-in. In this particular embodiment, only known, potential characteristic parameters are modeled, and the models are stored in a database 2435 accessed by a machine interface 2530. The database 2435 may reside on the workstation 2430, as shown, or on some other part of the APC framework. For instance, the models might be stored in the data store 2460 managed by the APC system manager 2540 in alternative embodiments. The model will generally be a mathematical model, i.e., an equation describing how the change(s) in nitride stripping bath control(s) affect the nitride strip performance and the FOX thickness variability from wafer to wafer and/or from run to run, and the like.

The particular model used will be implementation specific, depending upon the particular nitride strip processing tool 2410 and the particular characteristic parameter being modeled. Whether the relationship in the model is linear or non-linear will depend on the particular parameters involved.

The new settings are then transmitted to and applied by the nitride strip processing tool controller 2415. Thus, returning now to FIG. 26, once the identified characteristic parameter is modeled, the model is applied to modify at least one nitride stripping bath control input parameter, as set forth in box 2640. In this particular embodiment, the machine interface 2530 retrieves the model from the database 2435, plugs in the respective value(s), and determines the necessary change(s) in the nitride stripping bath control input parameter(s). The change is then communicated by the machine interface 2530 to the equipment interface 2510 over the line 2420. The equipment interface 2510 then implements the change.

The present embodiment furthermore provides that the models be updated. This includes, as set forth in boxes 2650-2660 of FIG. 26, monitoring at least one effect of modifying the nitride stripping bath control input parameters (box 2650) and updating the applied model (box 2660) based on the effect(s) monitored.

For instance, various aspects of the operation of the nitride strip processing tool 2410 will change as the hot aqueous phosphoric acid (H3PO4) bath, used to selectively etch silicon nitride (Si3N4) in the nitride strip processing tool 2410, ages. By monitoring the effect of the nitride stripping bath change(s) implemented as a result of the characteristic parameter measurement (e.g., workpiece 2405 FOX thickness and/or residual FOX defect count 155), the necessary values can be updated to yield superior performance.
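As a hedged illustration of boxes 2640-2660, the sketch below applies a linear model to convert a filtered error into a bounded correction of one bath control input and then nudges the model gain based on the monitored effect. The gain, step clamp, and learning rate are hypothetical engineer-set parameters, not values from this disclosure, and the update rule is one simple choice among many.

```python
def apply_model(current_setting, filtered_error, gain=0.5, max_step=1.0):
    """Box 2640: linear model, a correction proportional to the error,
    clamped to the maximum step size set by the process engineer."""
    step = -gain * filtered_error
    step = max(-max_step, min(max_step, step))
    return current_setting + step

def update_gain(gain, predicted_effect, observed_effect, rate=0.1):
    """Boxes 2650-2660: grow or shrink the gain as the monitored effect
    deviates from what the model predicted (a simple linear update)."""
    if predicted_effect:
        deviation = (observed_effect - predicted_effect) / predicted_effect
        gain *= 1.0 + rate * deviation
    return gain
```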
As noted above, this particular embodiment implements an APC system. Thus, changes are implemented "between" lots. The actions set forth in the boxes 2620-2660 are implemented after the current lot is processed and before the second lot is processed, as set forth in box 2670 of FIG. 26. However, the invention is not so limited. Furthermore, as noted above, a lot may constitute any practicable number of wafers from one to several thousand (or practically any finite number). What constitutes a "lot" is implementation specific, and so the point of the fabrication process in which the updates occur will vary from implementation to implementation.

Any of the above-disclosed embodiments of a method of manufacturing according to the present invention enables the use of central values and spreads of FOX thickness measurements sent from a measuring tool to make run-to-run processing tool adjustments, either manually and/or automatically, to improve and/or better control the yield. Additionally, any of the above-disclosed embodiments of a method of manufacturing according to the present invention enables semiconductor device fabrication with increased device density and precision, increased efficiency, and increased signal-to-noise ratio for the metrology tools, enabling a streamlined and simplified process flow, thereby decreasing the complexity and lowering the costs of the manufacturing process and increasing throughput.

Any of the above-disclosed embodiments of a method of manufacturing according to the present invention enables the monitoring and control of the FOX thickness following a nitride stripping and/or etching process step. As consecutive lots of workpieces (such as silicon wafers with various process layers formed thereon) are processed through a nitride stripping and/or etching process step, any of the above-disclosed embodiments of a method of manufacturing according to the present invention enables the monitoring and control of the silicon concentration in the stripping and/or etching bath, decreasing the FOX thickness variations. In particular, the FOX thickness will be more uniform from run to run and/or batch to batch, leading to a decreased number of residual FOX defects, further raising the workpiece throughput and further decreasing the workpiece manufacturing costs.

The particular embodiments disclosed above are illustrative only, as the invention may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. Furthermore, no limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope and spirit of the invention. Accordingly, the protection sought herein is as set forth in the claims below. |
The invention relates to a method and apparatus for stopping resin bleed and mold flash on integrated circuit lead finishes. A method and apparatus for minimizing resin bleed and mold flash on integrated circuit lead finishes by providing grooves on the external leads that control the length of resin bleed are provided. |
1. A molded semiconductor package having an edge to a molded portion of the package, a lead frame having a top surface and a bottom surface and a thickness therebetween, a plurality of leads, and a dam bar separated from the edge of the package, the package comprising:

at least one pair of adjacent leads and a dam bar having a lateral portion extending between the adjacent leads; and

a first barrier in the form of a groove formed on the top surface of each of the at least one pair of adjacent leads, wherein the first barrier is generally spaced apart from the edge of the package and extends to an outward portion of the dam bar, and wherein the first barrier is configured to prevent one or more of a plurality of constituents associated with the molding material from bleeding onto the surface of an outer region of the leads.

2. The molded semiconductor package of claim 1, wherein a distance between the first barrier and the edge of the package is 100 μm.

3. The molded semiconductor package of claim 1, wherein the shape of the groove is selected from the group consisting of a rectangular shape, a circular shape 1, a circular shape 2, a triangular shape 1, a triangular shape 2, a "V" shape, a "W" shape, and a rough surface formed by laser milling or coining.

4. The molded semiconductor package of claim 1, wherein grooves of the rectangular shape and the circular shape 1 are formed by laser milling, stamping, coining, or etching.

5. The molded semiconductor package of claim 1, wherein grooves of the circular shape 2 are formed by laser milling, stamping, or etching.

6. The molded semiconductor package of claim 1, wherein grooves of the triangular shape 1, the triangular shape 2, the "V" shape, and the "W" shape are formed by laser milling or stamping.

7. An integrated circuit in a molded package, the package having an edge to a molded portion of the package, a lead frame having a top surface and a bottom surface and a thickness therebetween, a plurality of leads, and a dam bar separated from the edge of the package, the integrated circuit comprising:

at least one pair of adjacent leads and a dam bar having a lateral portion extending between the adjacent leads; and

a first barrier in the form of a groove formed on the top surface of the dam bar and of the at least one pair of adjacent leads, wherein the first barrier is generally spaced apart from the edge of the package and from an outward portion of the dam bar toward an outer region of the lead frame, and wherein the first barrier is configured to prevent one or more of a plurality of constituents associated with the molding material from bleeding onto the surface of the outer region of the leads.

8. The integrated circuit in a molded package of claim 7, wherein a spacing between the first barrier and the edge of the package is 100 microns.
9. The integrated circuit in a molded package of claim 7, wherein the shape of the groove is selected from the group consisting of a rectangular shape, a circular shape 1, a circular shape 2, a triangular shape 1, a triangular shape 2, a "V" shape, a "W" shape, and a rough surface formed by laser milling or coining.

10. The integrated circuit in a molded package of claim 7, wherein grooves of the rectangular shape and the circular shape 1 are formed by laser milling, stamping, coining, or etching.

11. The integrated circuit in a molded package of claim 7, wherein grooves of the circular shape 2 are formed by laser milling, stamping, or etching.

12. The integrated circuit in a molded package of claim 7, wherein grooves of the triangular shape 1, the triangular shape 2, the "V" shape, and the "W" shape are formed by laser milling or stamping. |
Method and Apparatus to Stop Resin Bleed and Mold Flash on Integrated Circuit Lead Finishes

TECHNICAL FIELD

The present invention relates generally to semiconductor devices and, more particularly, to a method and apparatus for controlling the bleed of molding resin onto the leads.

BACKGROUND

This invention relates to the assembly and packaging of integrated circuit devices and, more particularly, to providing such devices with lead frames that stop resin bleed and mold flash on integrated circuit leads.

An integrated circuit in the form of a semiconductor chip is first attached to a support pad of a lead frame. Contacts or bond pads on the semiconductor device are then attached to corresponding contact pads on the ends of the leads by wire bonds.

After the wire bonding operation is completed, the lead frame is placed in a mold, and a container supplies thermosetting molding material to the mold. The molding material is injected into the mold to encapsulate the circuit.

Those skilled in the art have found it beneficial to form lead frames in a continuous strip. Each lead frame in the strip has an integrated circuit device attached to a support pad as mentioned above. The support pad itself is supported by two parallel side rails. Each side rail lies in the plane of the lead frame, on opposite sides of the die pad.

In the molding operation, mold cavities are formed around the lead frame so as to be in close proximity to, and sealed against, one another and the dam bar. The dam bar has lateral portions that extend between pairs of adjacent leads. The dam bar limits the flow of encapsulating material out of the enclosed region of the lead frame. After encapsulation, the portion of the mold flash that protrudes between the dam bar and the adjacent leads is removed by a punch. The punch is a typical metal punch that cuts the dam bar metal and also removes the protruding mold flash between the leads of the lead frame.

Some excess resin can nevertheless coat portions of the leads during the molding operation. This resin affects the formation and the electrical conductivity of the solder profile of the leads when they are soldered to a board. The excess resin is called "resin bleed." Resin bleed can appear transparent, in which case it is referred to as "clear bleed," or as a visible residue often referred to as "mold flash." Chemical deflashing and media deflashing are commonly used in the industry to remove excess resin from the leads.

Accordingly, a need has arisen for an improved lead frame for producing packaged integrated circuits having leads with limited mold flash or resin bleed, without the need for processes to remove mold flash and resin bleed.

SUMMARY

The summary provided below is intended to provide a basic understanding of one or more aspects of the invention. This summary is not an extensive overview of the invention, and is not intended to identify key or essential elements of the invention. Rather, the primary purpose of the summary is to present some aspects of the invention in a simplified form.

In accordance with an embodiment of the present application, a molded semiconductor package is provided. 
The molded semiconductor package has an edge to a molded portion of the package, a lead frame having a top surface and a bottom surface and a thickness therebetween, a plurality of leads, and a dam bar separated from the edge of the package. The package comprises: at least one pair of adjacent leads and a dam bar having a lateral portion extending between the adjacent leads; and a first barrier in the form of a groove formed on the top surface of each of the at least one pair of adjacent leads, wherein the first barrier is generally spaced apart from the edge of the package and extends to an outward portion of the dam bar, and wherein the first barrier is configured to prevent one or more of a plurality of constituents associated with the molding material from bleeding onto the surface of an outer region of the leads.

In accordance with another embodiment of the present application, an integrated circuit in a molded package is provided. The molded package has an edge to a molded portion of the package, a lead frame having a top surface and a bottom surface and a thickness therebetween, a plurality of leads, and a dam bar separated from the edge of the package. The integrated circuit comprises: at least one pair of adjacent leads and a dam bar having a lateral portion extending between the adjacent leads; and a first barrier in the form of a groove formed on the top surface of the dam bar and of the at least one pair of adjacent leads, wherein the first barrier is generally spaced apart from the edge of the package and from an outward portion of the dam bar toward an outer region of the lead frame, and wherein the first barrier is configured to prevent one or more of a plurality of constituents associated with the molding material from bleeding onto the surface of the outer region of the leads.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a top plan view of an exemplary package detailing the groove region when a groove is formed across the ends of all of the leads at the same time.

FIG. 2 is a top plan view of an exemplary package detailing the groove regions when grooves are formed on each of the leads individually.

FIG. 3 is a cross-sectional view of the package of FIGS. 1 and 2 taken at A:A', highlighting the groove region.

FIG. 4 is a top plan view of an exemplary set of package leads with the dam bar removed from the leads.

FIG. 5 is a view of groove shape options.

In the figures, like reference numerals are sometimes used to designate similar structural elements. It is also to be understood that the depictions in the figures are diagrammatic and not to scale.

DETAILED DESCRIPTION

The invention is described with reference to the drawings. The figures are not drawn to scale and are provided for illustration only. For purposes of illustration, several aspects of the invention are described below with reference to example applications. It will be appreciated that numerous specific details, relationships, and methods are set forth in order to provide an understanding of the invention. However, it will be readily apparent to those skilled in the art that the present invention may be practiced without one or more of the specific details. In other instances, well-known structures and operations are not shown in detail in order to avoid obscuring the invention. 
The present invention is not limited by the order of the acts or events shown, as some acts may occur in different sequences and/or concurrently with other acts or events. Moreover, not all of the acts or events required in accordance with the methods of the present invention are shown.

Roughened lead frame (LF) technology is often needed to reliably improve the performance of integrated circuit packages and to allow for improved reliability performance. This technique is effective in improving delamination performance under stress, but it introduces manufacturing challenges during assembly. The main problem solved by the present invention is that, due to the nature of the roughened LF, resin from the molding material used in the package, which consists of a plurality of constituents, tends to bleed outside the mold area during the molding process and, in some cases, to travel beyond the second bend of the lead. This creates two types of problems: solderability issues during the customer's surface-mount technology (SMT) process, and visual recognition issues during the SMT process (caused by the darker appearance of the leads where resin bleed is present). The focus of the present invention is to minimize the effects of resin bleed in order to avoid the problems mentioned above.

The present invention relates to the design of grooves on the outer leads of a roughened lead frame that control the length of resin bleed onto the leads. The excess resin is called "resin bleed." Resin bleed can appear as a transparent material, in which case it is referred to as "clear bleed," or as an opaque residue, which is commonly referred to as "mold flash."

Chemical deflashing and media deflashing processes are commonly used to remove resin from the surface of the leads. The solution provided in the present invention does not require such an additional deflashing process, because the resin is stopped by the disclosed method and apparatus, which may represent a cost avoidance for assembly/test (A/T) operations.

The groove regions can be formed during the laser milling, stamping, coining, or etching process of the lead frame. Laser milling is a process of removing metal from a lead frame using a laser. Stamping is a process in which a lead frame is formed using a set of dies and punches. Coining is a process of flattening the lead fingers of a lead frame for wire bonding purposes; an additional step to add coining on the outer leads is possible. Etching is a chemical process that removes metal with a chemical solution.

FIG. 1 shows a molded semiconductor package having an edge to a molded portion of the package, a lead frame having a top surface and a bottom surface and a thickness therebetween, a plurality of leads, and a dam bar separated from the edge of the package. The package also has at least one pair of adjacent leads from the plurality of leads and a dam bar having a lateral portion extending between the adjacent leads.

A first barrier in the form of a groove is formed either on the top surface of the dam bar and at least one pair of adjacent leads, in the first option as shown in FIG. 1, or on the top surface of each of at least one pair of adjacent leads, as shown in FIG. 2, wherein the first barrier is separated from the molded portion of the package by a spacing from the edge of the package. The first barrier can further extend to an outward portion of the dam bar, or even overlap the dam bar. 
The first barrier is configured to prevent one or more of the plurality of constituents associated with the molding material from bleeding onto the surface of the outer region of the leads.

The shaded area in FIG. 3 and the outlined areas in FIGS. 1 and 2 show example dimensions of the groove region. "W" in FIGS. 1, 2, and 3 shows the width of the groove region, which can be 200 microns (um). "L" in FIGS. 1 and 3 shows the length of the groove region when the groove is formed across all of the leads and the ends of the dam bar. "L" in FIGS. 2 and 3 shows the length of the groove region when grooves are formed on each of the leads individually; the length "L" of FIG. 2 can be the same as, or slightly narrower than, the width of the lead. "H" in FIG. 3 shows the height of the groove region, which can be one-eighth of the thickness of the lead, and "A" in FIGS. 1, 2, and 3 shows the distance by which the groove region is offset from the edge of the molded area, which can be 100 um. The dimensions listed above can vary depending on the type of package used (such as SOIC, QFP, TSSOP, or others).

FIG. 4 illustrates an exemplary set of leads including grooves after the dam bar has been removed from the molded package.

FIG. 5 shows a list of possible grooves that can be implemented by the present invention. The list also identifies the processes that can make each type of groove (such as stamping, coining, and etching). This list is exemplary and is not a complete list of all possible groove forms.

While various embodiments of the invention have been described above, they have been presented by way of example only and not limitation. Various modifications to the disclosed embodiments can be made in accordance with the disclosure herein without departing from the spirit and scope of the invention. Therefore, the breadth and scope of the present invention should not be limited by the embodiments described above. Rather, the scope of the invention should be defined by the appended claims and their equivalents. |
On a mobile communication device there are many more possible workflows that could be followed, given the available functions of that device. These may include, but are not limited to, 'click to call', 'click to locate', 'click to SMS', 'click to send a picture', and 'click to handle later', and are constrained only by the available and accessible functionality of the user's device. A list of actions to be made available in association with an advertisement is provided, along with an iconic visual representation of those actions, so that the user can identify what the resultant workflow will be if they activate the action. The list can be presented as selectable actions within the advertisement, on a sub-menu activated by a dedicated device key or assigned softkey, or directly activated by using dedicated device keys or assigned softkeys, or other user-to-device interaction methods. |
1. A method for distributing advertisement content to a mobile communication device, comprising: identifying a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; selecting an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; and sending an advertisement associated with the advertisement icon to the mobile communication device for presentation.

2. The method of claim 1, further comprising: predicting a royalty based on a value set for each communication function; and selecting one of a subset of acceptable communication functions based on maximizing the predicted royalty.

3. The method of claim 1, further comprising receiving tracked results of presentation of the advertisement from the mobile communication device.

4. The method of claim 1, further comprising facilitating the associated communication function to communicate with the advertising target in response to a user selection of the icon of the advertisement.

5. The method of claim 4, further comprising receiving tracked results of user interaction with the icon of the advertisement.

6. The method of claim 4, wherein facilitating the associated communication function further comprises click to call.

7. The method of claim 4, wherein facilitating the associated communication function further comprises click to wireless access protocol browser.

8. The method of claim 4, wherein facilitating the associated communication function further comprises click to brochure.

9. The method of claim 4, wherein facilitating the associated communication function further comprises click to email.

10. The method of claim 4, wherein facilitating the associated communication function further comprises click to landing.

11. The method of claim 4, wherein facilitating the associated communication function further comprises click to clip.

12. The method of claim 4, wherein facilitating the associated communication function further comprises click to forward.

13. The method of claim 4, wherein facilitating the associated communication function further comprises click to content.

14. The method of claim 4, wherein facilitating the associated communication function further comprises click to message.

15. The method of claim 4, wherein facilitating the associated communication function further comprises click to locate.

16. The method of claim 4, wherein facilitating the associated communication function further comprises click to promotion.

17. The method of claim 4, wherein facilitating the associated communication function further comprises click to coupon.

18. The method of claim 4, wherein facilitating the associated communication function further comprises click to buy. 
19. At least one processor for distributing advertisement content to a mobile communication device, comprising: a module for identifying a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; a module for selecting an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; and a module for sending an advertisement associated with the advertisement icon to the mobile communication device for presentation.

20. A computer program product for distributing advertisement content to a mobile communication device, comprising: a computer-readable medium comprising: at least one instruction for causing a computer to identify a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; at least one instruction for causing a computer to select an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; and at least one instruction for causing a computer to send an advertisement associated with the advertisement icon to the mobile communication device for presentation.

21. An apparatus for distributing advertisement content to a mobile communication device, comprising: means for identifying a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; means for selecting an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; and means for sending an advertisement associated with the advertisement icon to the mobile communication device for presentation.

22. An apparatus for distributing advertisement content to a mobile communication device, comprising: an editing computing platform comprising a graphical user interface for identifying a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons, and for selecting an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; and a communication module for sending an advertisement associated with the advertisement icon to the mobile communication device for presentation.

23. The apparatus of claim 22, further comprising a processor of a marketing computing platform for predicting a royalty based on a value set for each communication function, and for selecting one of a subset of acceptable communication functions based on maximizing the predicted royalty. 
24. The apparatus of claim 22, further comprising the processor for receiving tracked results of presentation of the advertisement from the mobile communication device.

25. The apparatus of claim 24, further comprising the processor for facilitating the associated communication function to communicate with the advertising target in response to a user selection of the icon of the advertisement.

26. The apparatus of claim 25, further comprising the processor for receiving tracked results of user interaction with the icon of the advertisement.

27. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to call.

28. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to wireless access protocol browser.

29. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to brochure.

30. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to email.

31. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to landing.

32. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to clip.

33. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to forward.

34. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to content.

35. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to message.

36. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to locate.

37. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to promotion.

38. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to coupon.

39. The apparatus of claim 25, wherein the processor for facilitating the associated communication function further comprises click to buy.

40. A method for a mobile communication device to implement advertisement content, comprising: incorporating a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; receiving a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; receiving an advertisement associated with the advertisement icon at the mobile communication device for presentation; and implementing the selected advertisement action in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement.

41. The method of claim 40, further comprising: predicting a royalty based on a value set for each communication function; and selecting one of a subset of acceptable communication functions based on maximizing the predicted royalty. 
42. The method of claim 40, further comprising tracking results of presentation of the advertisement to the user.

43. The method of claim 40, further comprising implementing the associated communication function by initiating communication with the advertising target in response to a user selection of the icon of the advertisement.

44. The method of claim 40, further comprising tracking results of user interaction with the icon of the advertisement.

45. The method of claim 40, wherein implementing the associated communication function further comprises click to call.

46. The method of claim 40, wherein implementing the associated communication function further comprises click to wireless access protocol browser.

47. The method of claim 40, wherein implementing the associated communication function further comprises click to brochure.

48. The method of claim 40, wherein implementing the associated communication function further comprises click to email.

49. The method of claim 40, wherein implementing the associated communication function further comprises click to landing.

50. The method of claim 40, wherein implementing the associated communication function further comprises click to clip.

51. The method of claim 40, wherein implementing the associated communication function further comprises click to forward.

52. The method of claim 40, wherein implementing the associated communication function further comprises click to content.

53. The method of claim 40, wherein implementing the associated communication function further comprises click to message.

54. The method of claim 40, wherein implementing the associated communication function further comprises click to locate.

55. The method of claim 40, wherein implementing the associated communication function further comprises click to promotion.

56. The method of claim 40, wherein implementing the associated communication function further comprises click to coupon.

57. The method of claim 40, wherein implementing the associated communication function further comprises click to buy.

58. At least one processor for a mobile communication device to implement advertisement content, comprising: a module for incorporating a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; a module for receiving a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; a module for receiving an advertisement associated with the advertisement icon at the mobile communication device for presentation; and a module for implementing the selected advertisement action in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement. 
59. A computer program product for a mobile communication device to implement advertisement content, comprising: a computer-readable medium comprising: at least one instruction for causing a computer to incorporate a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; at least one instruction for causing the computer to receive a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; at least one instruction for causing the computer to receive an advertisement associated with the advertisement icon at the mobile communication device for presentation; and at least one instruction for causing the computer to implement the selected advertisement action in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement.

60. An apparatus for a mobile communication device to implement advertisement content, comprising: means for incorporating a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; means for receiving a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; means for receiving an advertisement associated with the advertisement icon at the mobile communication device for presentation; and means for implementing the selected advertisement action in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement.

61. An apparatus for a mobile communication device to implement advertisement content, comprising: local storage for incorporating a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons, and for receiving a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; a communication module for receiving an advertisement associated with the advertisement icon at the mobile communication device for presentation; a user interface for receiving an input by a user interacting with the advertisement; and a processor for implementing the selected advertisement action in response to the user input.

62. The apparatus of claim 61, further comprising the processor for predicting a royalty based on a value set for each communication function, and for selecting one of a subset of acceptable communication functions based on maximizing the predicted royalty.

63. The apparatus of claim 61, further comprising the processor tracking results of presentation of the advertisement to the user. 
The apparatus of claim 61, further comprising the processor for implementing the associated communication function by initiating communication with the advertising target in response to a user selection of the icon of the advertisement. The apparatus of claim 61, further comprising the processor tracking results of user interaction with the icon of the advertisement. The apparatus of claim 61, wherein the processor implementing the associated communication function further comprises click to call. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to wireless access protocol browser. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to brochure. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to email. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to landing. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to clip. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to forward. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to content. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to message. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to locate. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to promotion. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to coupon. The apparatus of claim 61, wherein the processor for implementing the associated communication function further comprises click to buy. |
MULTIPLE ACTIONS AND ICONS FOR MOBILE ADVERTISING CLAIM OF PRIORITY UNDER 35 U.S.C. 119 [0001] This Application for Patent claims the benefit of U.S. Provisional Application Serial No. 61/025,624 filed on 01 February 2008 entitled "ICONS FOR MOBILE ADVERTISING," the disclosure of which is hereby incorporated by reference in its entirety. BACKGROUND [0002] Aspects disclosed herein pertain to a communication network that distributes and tracks advertisements presented on a mobile communication device, and in particular, to providing a marketplace platform that serves as a bridge between advertising platforms and a population of mobile communication devices for targeting and tracking particular advertisements suitably formatted and timed for a user of a mobile communication device. [0003] For many years, companies have tried to brand their products, satisfy existing consumers, and reach potential new consumers through traditional means. The evolution has been linear when less creative and sometimes non-linear when more creative, as advertising has gone from print forms like newspapers, magazines, brochures, newsletters, press releases and billboards, to event-related activities, like sponsorships, seminars, point-of-sale and promotional programs, to broadcast media, like radio, television, cable and recently satellite cable. [0004] In recent years, there has been a rise of advertising that is more targeted and tailored to individual consumers, with new forms of previously so-called direct advertising. New endeavors have sought to interact directly with consumers through pull campaigns and push campaigns, and to make advertising more measurable, bringing advertisers specific consumer data mining bearing on consumer buying habits, trending and predicting future habits. Advances in technology outlets combined with marketing ingenuity have expanded the old direct mail marketing campaigns into new branches, including telemarketing, point-of-sale campaigns, computer platforms, and most recently distribution and measurement through telecommunications networks. [0005] With respect to the latter, perhaps the greatest platform for the new world of marketing has been the same as the greatest platform for information exchange in the last decade, namely the Internet. Through such avenues as branded websites, banner ads, pop-up ads, targeted e-mails, and portal sponsorships, to name a few examples, advertisers have been able to home in on target audiences. Through defined metrics and innovative semantics, like served impressions, click-through rate (CTR), cost per action (CPA), cost per click (CPC), cost per sale (CPS), and cost per thousand (CPM), to name a few, advertisers have been able to measure the results of targeted ads and objectively set fees for performance results obtained. Along with these new advances, and because of the increasingly cosmopolitan nature of business, geopolitics, and integrated telecommunications networks, so too has advertising become increasingly global in nature. [0006] Along with advances in personal computing that enabled expansion of Internet advertising (e.g., desktop and notebook computers and broadband Internet access), advances in technology have also resulted in smaller and more powerful personal computing devices.
For example, there currently exist a variety of portable personal computing devices, including wireless computing devices, such as portable wireless telephones, personal digital assistants (PDAs) and paging devices that are each small, lightweight, and can be easily carried by users. With advances in computing technology, consumers are increasingly offered many types of electronic devices ("user equipment") that can be provisioned with an array of software applications. Distinct features such as email, Internet browsing, game playing, address book, calendar, media players, electronic book viewing, voice communication, directory services, etc., increasingly are selectable applications that can be loaded on a multi-function device such as a smart phone, portable game console, or hand-held computer. [0007] Even with these advances, mobile communication devices tend to have communication bandwidth, processing, and user interface constraints relative to general purpose computing devices. For example, the screen size, amount of available memory and file system space, amount of input and output capabilities and processing capability may each be limited by the small size of the device. Because of such severe resource constraints, it is desirable, for example, to maintain a limited size and quantity of software applications and other information residing on such remote personal computing devices, e.g., client devices. As such, the computing platforms for such devices are often optimized for a particular telephone chipset and user interface hardware. [0008] Limited attempts to extend advertising to mobile communication devices have generally followed the paradigm of Internet browsing. Given the differences in how a user chooses to use a mobile communication device and given its limitations, such mobile web advertising has been of marginal quantity and value to advertisers. SUMMARY [0009] The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed versions. This summary is not an extensive overview and is intended to neither identify key or critical elements nor delineate the scope of such versions. Its purpose is to present some concepts of the described versions in a simplified form as a prelude to the more detailed description that is presented later. [0010] In one aspect, a method is provided for distributing advertisement content to a mobile communication device. A plurality of advertisement actions are defined, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons. An advertisement action is selected from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. The advertisement icon is sent with an advertisement to the mobile communication device for presentation. [0011] In another aspect, at least one processor distributes advertisement content to a mobile communication device. A module defines a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons.
A module selects an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. A module sends an advertisement associated with the advertisement icon to the mobile communication device for presentation. [0012] In an additional aspect, a computer program product distributes advertisement content to a mobile communication device. A computer-readable medium comprises at least one instruction for causing a computer to identify a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; at least one instruction for causing the computer to select an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; and at least one instruction for causing the computer to send an advertisement associated with the advertisement icon to the mobile communication device for presentation. [0013] In another additional aspect, an apparatus distributes advertisement content to a mobile communication device. Means are provided for identifying a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons. Means are provided for selecting an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. Means are provided for sending an advertisement associated with the advertisement icon to the mobile communication device for presentation. [0014] In a further aspect, an apparatus distributes advertisement content to a mobile communication device. An editing computing platform comprises a graphical user interface for identifying a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons, and for selecting an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. A communication module sends an advertisement associated with the advertisement icon to the mobile communication device for presentation. [0015] In yet one aspect, a method is provided for a mobile communication device to implement advertisement content. A plurality of advertisement actions are received, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons.
A selection for an advertisement action is received from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. An advertisement associated with the advertisement icon is received at the mobile communication device for presentation. The selected advertisement action is implemented in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement. [0016] In yet another aspect, at least one processor for a mobile communication device implements advertisement content. A module receives a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons. A module receives a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. A module receives an advertisement associated with the advertisement icon at the mobile communication device for presentation. A module implements the selected advertisement action in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement. [0017] In yet an additional aspect, a computer program product for a mobile communication device implements advertisement content by having a computer-readable medium that comprises at least one instruction for causing a computer to incorporate a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons; at least one instruction for causing the computer to receive a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function; at least one instruction for causing the computer to receive an advertisement associated with the advertisement icon at the mobile communication device for presentation; and at least one instruction for causing the computer to implement the selected advertisement action in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement. [0018] In yet another additional aspect, an apparatus for a mobile communication device implements advertisement content. Means are provided for incorporating a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons. Means are provided for receiving a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function.
Means are provided for receiving an advertisement associated with the advertisement icon at the mobile communication device for presentation. Means are provided for implementing the selected advertisement action in response to an input by a user via a user interface of the mobile communication device interacting with the advertisement. [0019] In yet a further aspect, an apparatus for a mobile communication device implements advertisement content. Local storage receives a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons, and receives a selection for an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. A communication module receives an advertisement associated with the advertisement icon at the mobile communication device for presentation. A user interface receives an input by a user interacting with the advertisement. A processor implements the selected advertisement action in response to the user input. [0020] To the accomplishment of the foregoing and related ends, one or more versions comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects and are indicative of but a few of the various ways in which the principles of the versions may be employed. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings, and the disclosed versions are intended to include all such aspects and their equivalents. BRIEF DESCRIPTION OF THE DRAWINGS [0021] FIG. 1 illustrates a communication system for deploying action icons to mobile communication devices, according to one aspect; [0022] FIG. 2 illustrates a methodology for deploying action icons to mobile communication devices, according to another aspect; [0023] FIG. 3 illustrates a block diagram of an end-to-end mobile advertising communication system, according to yet another aspect; [0024] FIG. 4 illustrates a timing diagram of a mobile device, marketplace platform, and advertising platform of the end-to-end mobile advertising communication system, according to another aspect; [0025] FIG. 5 is a schematic diagram of an illustrative end-to-end mobile advertising communication system, according to still another aspect; [0026] FIG. 6 is a diagram of an illustrative graphical user interface for campaign management of the communication system of FIG. 5, according to yet another aspect; [0027] FIG. 7 is a block diagram of a mobile communication device of FIG. 5, according to one aspect; [0028] FIG. 8 is a flow diagram of a methodology for mobile communication device advertising performed by the communication system of FIG. 5, according to another aspect. [0029] FIG. 9 is a flow diagram of a methodology for end-to-end mobile advertising, according to yet another aspect. [0030] FIG. 10 is a flow diagram of a methodology for location-informed behavioral profiling of the methodology of FIG. 9, according to one aspect. [0031] FIG. 11 is a flow diagram of a methodology for reach-frequency-time advertising of the methodology of FIG.
9, according to one aspect. [0032] FIG. 12 is a flow diagram of a methodology for interceptor micro-targeting advertising of the methodology of FIG. 9, according to another aspect. [0033] FIG. 13 is a flow diagram of a methodology for timed coupon advertising of the methodology of FIG. 9, according to still another aspect. [0034] FIG. 14 is a flow diagram of a methodology for selecting icon actions for a mobile communication device, according to one aspect. [0035] FIG. 15 is a flow diagram of a methodology for selecting a publicly viewed advertisement based upon sensed demographics of a viewing audience, according to one aspect. [0036] FIG. 16 is a flow diagram for consumer to consumer advertising, according to one aspect. [0037] FIG. 17 is a block diagram of a network distribution device having modules in a computer-readable storage medium executed by at least one processor for distributing advertisement content to a mobile communication device, according to one aspect. [0038] FIG. 18 is a block diagram of a mobile communication device having modules in a computer-readable storage medium executed by at least one processor for implementing advertisement content, according to one aspect. DETAILED DESCRIPTION [0039] On the Internet, the resultant single workflow from activating an advertisement, which can be viewed within a host web page in a web browser, is to launch a landing page within the same or a new instance of the web browser. On a mobile communication device there are many more possible workflows that could be followed given the available functions of that device. These may include, but are not limited to, "click to call", "click to locate", "click to SMS", "click to send a picture", "click to handle later", and are constrained only by the available and accessible functionality of the user's device. A list of actions to be made available in association with an advertisement is provided along with an iconic visual representation of those actions for the user to identify what the resultant workflow will be if they activate the action. The list can be presented as selectable actions within the advertisement, on a sub menu activated by a dedicated device key or assigned softkey, or directly activated by using dedicated device keys or assigned softkeys, or other user-to-device interaction methods. [0040] Various aspects are now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that the various aspects may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to concisely describe these versions. [0041] Additionally, in the subject description, the word "exemplary" is used to mean serving as an example, instance, or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion.
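To make the action-to-workflow dispatch of paragraph [0039] concrete, the following is a minimal sketch in Python; every handler, action name, and target value is hypothetical and chosen only to illustrate constraining the offered list to the device's available functionality and the advertiser's accessibility.

# Hypothetical sketch of the dispatch in [0039]; not a real device API.
def click_to_call(target):
    print(f"dialing {target}")

def click_to_locate(target):
    print(f"mapping a route to {target}")

def click_to_sms(target):
    print(f"composing an SMS to {target}")

# The full inventory of candidate workflows.
WORKFLOWS = {
    "click to call": click_to_call,
    "click to locate": click_to_locate,
    "click to SMS": click_to_sms,
}

def offered_actions(device_functions, advertiser_functions):
    """Offer only actions the device supports and the advertiser accepts."""
    return [name for name in WORKFLOWS
            if name in device_functions and name in advertiser_functions]

if __name__ == "__main__":
    device = {"click to call", "click to SMS"}         # device capability
    advertiser = {"click to call", "click to locate"}  # advertiser accessibility
    for action in offered_actions(device, advertiser):
        WORKFLOWS[action]("+1-555-0100")               # placeholder target

Only "click to call" survives both filters in this example, which is the behavior the paragraph describes for a constrained device.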
[0042] The apparatus and methods are especially well suited for use in wireless environments, but may be suited to any type of network environment, including but not limited to, communication networks, public networks, such as the Internet, private networks, such as virtual private networks (VPN), local area networks, wide area networks, long haul networks, or any other type of data communication network. [0043] Referring to FIG. 1, according to one aspect, a communication system 10 facilitates creation, deployment and tracking by a marketplace platform 12 of an advertising campaign on behalf of an advertiser 14 to a population of mobile communication devices 16. The marketplace platform 12 utilizes network communication component 18 to participate in a communication network 20 that forms a communication link, depicted as a wireless data packet air interface 22, between a network radio access technology 24 and an antenna 26 of the mobile communication device 16. The marketplace platform 12 can optimize the advertisement campaign to reflect constraints of the mobile communication device 16, interactions allowed by the advertiser 14, and further royalty-bearing preferences of the advertiser. [0044] Mobile communication devices 16 can have a communication module 28 that is constrained in the types of communications performed. These constraints can be technical or programmatic, the latter referring to subscriber agreements with the communication network 20 or other external factors. For example, the device could support Short Message Service (SMS) (i.e., text messaging), web browser, email, telephone service, etc. In some instances, a user interface 30 of the communication device 16 can impose limitations or preferences as well for types of communication. For example, a display size, available input devices (e.g., dual tone multi-frequency (DTMF) keypad versus a QWERTY keyboard), etc., can make certain interactions with an advertiser feasible or desirable. For aspects in which the communication device 16 is mobile, the air interface 22 can dynamically change, such as reduced throughput situations that warrant changing options for interacting with an advertiser 14. In the depiction of FIG. 1, these constraints/capabilities are illustrated for the advertiser 14 regarding a communication type A 32, type B 34, type C 36 and type D 38, wherein communication type C 36 is not enabled. The constraints/capabilities for the communication device 16 are that communication type B 34 is not enabled. Thus, an advertisement editor 40 of the marketplace platform 12 can implement a subset of the available communication types A, D 32, 38 for defining an advertising campaign suitable for the communication device 16 to interact with the advertiser 14. [0045] To enhance the value to the advertiser 14, and thus the royalty-bearing potential for the advertising campaign, a royalty optimization component 42 of the marketplace platform 12 considers that one communication type D 38 has a greater value to the advertiser 14, which is depicted as an advertisement 44 having been created by the editor component 40 including an Action D 46 for accessing communication type D 38 of the communication device 16 as specified by access data 48 specific to the advertiser 14 to perform an interaction as depicted at 49. An icon D 50 paired with the action D 46 intuitively suggests the type of action D 46 to a user.
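As a sketch of paragraphs [0044]-[0045], the selection reduces to intersecting the advertiser-enabled and device-enabled communication types and preferring the type the advertiser values most (type D in FIG. 1). The type labels and values below are invented for illustration.

# Illustrative sketch of [0044]-[0045]; types and royalty values are invented.
ADVERTISER_TYPES = {"A": 1.0, "B": 2.0, "D": 5.0}  # type C not enabled; D valued highest
DEVICE_TYPES = {"A", "C", "D"}                     # type B not enabled on the device

def suitable_types(advertiser, device):
    """Subset of communication types usable end to end."""
    return {t: v for t, v in advertiser.items() if t in device}

def preferred_action(advertiser, device):
    """Pick the suitable type with the greatest royalty-bearing value."""
    usable = suitable_types(advertiser, device)
    return max(usable, key=usable.get) if usable else None

print(suitable_types(ADVERTISER_TYPES, DEVICE_TYPES))   # {'A': 1.0, 'D': 5.0}
print(preferred_action(ADVERTISER_TYPES, DEVICE_TYPES)) # 'D', matching FIG. 1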
Media content 52 of the advertisement 44 communicates information or excitement to assist in prompting user interaction with the icon D 50. An advertisement selection component 54 and an advertisement tracking component 56, both also resident in memory 58 of the communication device 16, recognize opportunities to present and track respectively the advertisement 44 that satisfy a reach-frequency-time goal for the advertising campaign, reporting both the opportunities presented and user responses in some instances to a campaign distribution tracking component 59 of the marketplace platform 12. [0046] It should be appreciated with the benefit of the present disclosure that action definitions, icon graphics, and assignment of a particular icon graphic to an action definition and to a particular advertisement can be separate communications. For example, a mobile communication device 16 can be configured by the original equipment manufacturer (OEM) to have a plurality of selectable actions (e.g., click to call, click to text message, etc.). At some point, each action can be associated with one or more icons that graphically suggest the function. A plurality of such icons for each action can be desirable for different user preferences, style considerations appropriate for a particular advertisement, correlation with other graphical user interfaces that have connotations to the user or to the advertiser, etc. Provisioning the mobile communication device 16 in advance can also yield throughput efficiencies by not requiring icon graphics or action instructions to be resent with each advertisement. [0047] It should be appreciated that the royalty optimization component 42 can address additional factors, such as the likelihood that a user will respond to a type of communication (action). For example, a pattern of usage can indicate that a particular user prefers not to use his communication device for browsing to a website but instead is receptive to click-to-call actions to speak with an advertiser. A higher probability of a user action of a lower royalty-bearing interaction can be the optimum solution for a particular communication device 16. Revenue optimization can be a distributed function, with each mobile communication device 16 optimizing based on factors such as a value given to each type of action, filtering for actions available on the device 16, and behavioral preferences either implicit or explicitly established for the user, etc. [0048] FIGS. 2, 4, and 8-16 illustrate methodologies and/or flow diagrams in accordance with the claimed subject matter. For simplicity of explanation, the methodologies are depicted and described as a series of acts. It is to be understood and appreciated that the subject innovation is not limited by the acts illustrated and/or by the order of acts. For example, acts can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methodologies in accordance with the claimed subject matter. In addition, those skilled in the art will understand and appreciate that the methodologies could alternatively be represented as a series of interrelated states via a state diagram or events.
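Paragraph [0047] is essentially an expected-value calculation: a low-royalty action the user is likely to take can out-earn a high-royalty action the user ignores. A hypothetical sketch, with invented values and response probabilities:

# Hypothetical expected-value reading of [0047]; numbers are illustrative only.
def best_action(action_values, response_probabilities):
    """Return the action maximizing value x probability of a user response."""
    expected = {
        action: value * response_probabilities.get(action, 0.0)
        for action, value in action_values.items()
    }
    return max(expected, key=expected.get)

values = {"click-to-buy": 10.0, "click-to-call": 4.0, "click-to-locate": 1.0}
probabilities = {"click-to-buy": 0.01, "click-to-call": 0.30, "click-to-locate": 0.50}

# click-to-call wins (expected 1.20) over click-to-buy (0.10) despite the
# lower per-interaction royalty, matching the paragraph's observation.
print(best_action(values, probabilities))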
Additionally, it should be further appreciated that the methodologies disclosed hereinafter and throughout this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. [0049] Referring to FIG. 2, methodology 60 for multiple actions and icons in mobile advertising is depicted between the marketplace platform 12, advertiser 14, and mobile communication device 16, according to one aspect. An inventory of paired action-icons is maintained that is intuitively explanatory (block 62). Configuration information for a population of mobile communication devices 16 is conveyed to the marketplace platform 12 (block 64). This configuration information can specify what types of advertising actions are feasible, desired, or effective. The advertiser platform 14 provides campaign access data for targeting advertising actions (block 66). Examples can be voice telephone numbers, websites, brochure download links, an email address, etc. Media content can also be conveyed (block 68). For example, trademark logos and advertising slogans can be incorporated. Preferences for actions are conveyed from the advertiser platform 14 to the marketplace platform 12 (block 70). [0050] The marketplace platform 12 determines a subset of actions that are suitable to both the mobile communication device 16 and to the advertising platform 14 (block 72). Royalty optimizations are performed, which can entail maximizing a royalty to the marketplace platform 12 (block 74). An advertisement is constructed that is suitable for disseminating to the mobile communication device and which contains advertiser access data, presentation goals for reach-frequency-time for the population of mobile communication devices and a particular targeted device, and action-icon(s) (block 76). The advertisement is distributed to the mobile communication device 16 (block 78). For example, the advertisement can be an image suitable for a requesting/receiving mobile communication device chipset and software platform. [0051] At a suitable time, the reach-frequency-time selection component monitors device usage in order to post a new advertisement (block 80). For example, the user can be interacting with the user interface such that there is a reasonable likelihood that the user will perceive the advertisement. This presentation is logged for tracking this likely viewing by the user (block 82). It should be appreciated that viewing can include or be substituted for haptic or audible advertisements. Should the device 16 need additional advertisements, the marketplace platform 12 can periodically or upon request distribute more (block 84). It should be appreciated that the royalties can be based upon the reach-frequency-time tracking, such as for image advertising that does not necessarily result in many or any direct contacts to the advertiser. [0052] Should the user select the action-icon and interact with the advertiser, the mobile communication device 16 performs the prescribed action in accordance with the access data to the advertiser platform 14 or wherever this follow-on action is directed (block 86). For example, fulfillment could be facilitated by the network operator.
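Block 76 enumerates what the constructed advertisement carries: advertiser access data, reach-frequency-time presentation goals, and paired action-icons. A sketch of that payload as a data structure, with all field names assumed rather than taken from the specification:

# A sketch of the block 76 payload; field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ActionIcon:
    action: str        # e.g., "click to call"
    icon_id: str       # reference to a pre-provisioned icon graphic ([0046])
    access_data: str   # advertiser target: phone number, URL, etc. (block 66)

@dataclass
class Advertisement:
    media_content: bytes          # image sized for the target chipset/platform
    reach_profile: str            # targeted behavioral classification
    frequency_goal: int           # minimum number of presentations
    time_goal_seconds: int        # cumulative viewing duration
    actions: list = field(default_factory=list)

ad = Advertisement(
    media_content=b"...",
    reach_profile="category-K",
    frequency_goal=4,
    time_goal_seconds=30,
    actions=[ActionIcon("click to call", "icon-phone", "+1-555-0100")],
)
print(ad.frequency_goal, ad.actions[0].action)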
The device 16 or the marketplace platform 12 tracks this user action (block 88), which can include an advertisement report to the marketplace platform (block 90), which in turn calculates a royalty due (block 92) and transmits a royalty due report to the advertiser platform 14 (block 94). [0053] In an exemplary implementation, action icons can be introduced into an end-to-end mobile advertising system that provides a marketplace platform that characterizes user behavior (e.g., location, interaction with advertisements on a mobile communication device, etc.) in order to select micro-targeted advertisements from an advertisement platform. The marketplace platform handles the formatting required for presentation suitable for communication devices. The advertisements are presented in accordance with negotiated tags for a suitable audience ("reach"), for a suitable number of presentations ("frequency") and for an effective duration ("time") within a particular scheduled window. A timed coupon advertisement campaign is also supported where advertisements include a schedule metric. Effectiveness is gauged even in the instance of impression advertisements by monitoring user location and/or interaction with the communication device to determine a change in behavior (e.g., does not go to a competitor as forecasted, does go to a location of the advertiser, calls the advertiser, clips the advertisement for future reference, etc.). This effectiveness is further tracked across applications and/or platforms to capture reach, frequency, and duration of a particular advertising campaign for a user. Not only does the marketplace platform handle the interfacing for the particular format needs of mobile communication devices, but the marketplace platform also secures user identification for privacy reasons from advertising entities that provide the advertisements. [0054] Referring to FIG. 3, a communication system 100 provides an end-to-end solution for advertisers to extend the reach of their advertising platforms 102 to a population of client devices, depicted as mobile communication devices 104, even though the mobile communication devices 104 have display, communication bandwidth, and user interaction that differ markedly from other communication channels used by the advertising platforms 102, according to one aspect. A marketplace platform 106 provides the interface between the advertising platforms 102 and the mobile communication devices, handling the specific needs of mobile communication devices 104. For example, the marketplace platform 106 includes a formatting component 108 that formats advertisements on behalf of the advertising platform 102 so that the advertisers can maintain one advertising inventory 110 used for other advertising distribution and communication channels (e.g., web portals, etc.). Thus, the advertising platform need not keep up to date with a myriad of presentation constraints for each configuration 112 of mobile communication device 104. Thus, the advertisement can be presented in a suitable rendering with suitable interaction options in accordance with a user interface 114 of the particular mobile communication device 104. [0055] The marketplace platform 106 provides additional value to advertisers by determining a "reach" of the population of mobile devices 104.
Not only does the marketplace platform 106 know the capabilities for presentation of advertisements, but behavior of the user is also sensed via the user interface 114 (e.g., call history, interaction with mobile advertisements, etc.) and/or by a location sensing component 116 of the mobile communication device 104. These behavior indications are reported by an advertising client 118, also resident on the mobile communication device 104. Thereby, the marketplace platform 106 can go beyond "suspect" demographic data about the mobile communication devices 104 by storing behavioral and demographic data in a database 120. An advertisement forecasting component 122 analyzes this data in order to characterize the directly sensed or interpreted behavior of a user of the mobile communication device 104. [0056] When the mobile communication device 104 needs additional advertisements, the advertising client 118 makes a request, which is forwarded by the marketplace platform 106. While achieving the latter, individual identifications are filtered out with a privacy component 124, such that the advertising platform 102 knows only a characterization of the mobile communication device 104. Alternatively, the marketplace platform 106 has access to a range of advertisements in the advertisement inventory 110 of the advertising platform 102 and utilizes an advertisement micro-targeting component 126 to select appropriate advertisements for the requesting mobile communication device 104 in accordance with a characterization maintained by the advertising forecasting component 122. The mobile communication device 104 presents the advertisement on the user interface 114 and reports the usage via the advertising client 118 to the marketplace platform 106. The data can be processed by a report formatting component 128 in accordance with a data format compatible with the advertising platform 102 so that advertisers can assess the effectiveness of an advertisement campaign. The advertisement tracking data can also be processed by a billing component 130, especially in instances where the amount of payment owed to the marketplace platform 106 is related to the advertisement tracking data. In instances where users have interacted in a way with the user interface 114 indicating a desire to purchase goods or services associated with a presented advertisement, the marketplace platform 106 can provide an advertisement brokered sale component 132, leveraging current billing avenues, authentication methods, and privacy filters in order to facilitate a transaction between the advertising platform 102 and a user of the mobile communication device 104. [0057] The reach, frequency, and time of exposure to advertising can be extended to capture instances in which a user 140 can be exposed to the same advertisement campaign across multiple computing environments (e.g., applications, devices, etc.). For instance, the user 140 interacts with one client device (e.g., mobile communication device 104) whose user interface 114 is capable of presenting multiple applications (e.g., WAP browser, game console, communication device menu, etc.). Alternatively or in addition, the user 140 can interact with a second user interface 142 of another client device 144 that also has an advertising client 146 that responds to the marketplace platform 106.
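A hedged sketch of the micro-targeting in [0056]: advertisements are selected from an inventory whenever their target categories overlap the characterization supplied by the forecasting component; crucially, only the characterization, never a raw identity, crosses the privacy boundary. All names are illustrative.

# Illustrative sketch of [0056]; record shapes are invented.
def micro_target(inventory, device_characterization):
    """Return ads whose target categories intersect the device's profile."""
    return [
        ad for ad in inventory
        if set(ad["target_categories"]) & set(device_characterization)
    ]

inventory = [
    {"id": "ad-1", "target_categories": ["skateboarding", "youth"]},
    {"id": "ad-2", "target_categories": ["luxury-travel"]},
]
profile = ["youth", "gaming"]   # a characterization, never an identity
print([ad["id"] for ad in micro_target(inventory, profile)])  # ['ad-1']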
A persistent reach-frequency-time tracking component 148 of the marketplace platform 106 instructs the mobile communication device 104 and client device 144 and receives reports as to partial compliance with the exposure metrics in order to determine when an advertising target has been satisfied. [0058] An example of such persistent reach-frequency-time advertising would be a fourteen-year-old boy Joey whom the marketplace platform 106 has determined to be a skateboard enthusiast based upon behavior (e.g., a search performed on a WAP browser on the mobile communication device 104, frequent proximity to a skateboard recreation center, solicited opt-in, etc.). A sports shoe manufacturer can have an advertising campaign that promotes use of their product in skateboard events and has selected a classification of users like Joey to receive their advertisements. In particular, the campaign specifies that each recipient of the appropriate inclination (i.e., reach) is to receive the advertisement at least four times (i.e., frequency) for a total of thirty seconds duration (i.e., time). Opportunities to satisfy this exposure metric can be realized in part when Joey selects to play a skateboarding game on his mobile communication device 104. Another portion of the exposure time can occur when Joey accesses a financial webpage to view his stock values. Another opportunity for presenting the advertisement can occur when viewing a home screen of the user interface 114 upon initial activation, implying that Joey is viewing the client device 104. [0059] As another example, a young adult Chris can interact occasionally with a number of different client devices 104, 144 including a personal cell phone with a graphical user interface, a wirelessly enabled portable game console, a cell phone-enabled handheld or tablet device largely used for email, etc. The marketplace platform 106 can be associated with more than one of these devices (not shown), associating their use with the same user, and thus a selected advertising campaign, enabling additional opportunities to complete the required frequency and/or duration of exposure to an advertisement. [0060] In some applications, the user 140 passively interacts with the second client device 144, such as viewing a dynamic public advertisement (e.g., active billboard). This passive interaction can be determined by the persistent reach-frequency-time tracking component 148 correlating location data from the location sensing component 116 of the mobile communication device 104 with a sensed or predetermined location of the client device 144. This can be micro-targeting of advertising, such as instances in which only one or a few individuals are capable of seeing the dynamic advertising display. Alternatively or in addition, the dynamic public advertisement platform can be a large dynamic display that is simultaneously viewed by a larger population, such as alongside a highway or at a busy pedestrian thoroughfare. A revenue optimizing system for dynamically changing the advertisement presented can benefit from feedback regarding the current demographic and/or behavioral profile characterization of some, many, or all of the viewers. Thus, a generally applicable soft drink advertisement could be the default advertisement presented.
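A sketch of the persistent reach-frequency-time bookkeeping in [0057]-[0058], aggregating partial exposures reported by several of one user's devices until the campaign goal (Joey's four impressions and thirty seconds) is met. The report format is assumed, not taken from the specification.

# Sketch of component 148's bookkeeping; the report interface is invented.
from collections import defaultdict

class ExposureTracker:
    def __init__(self, frequency_goal, time_goal_seconds):
        self.frequency_goal = frequency_goal
        self.time_goal_seconds = time_goal_seconds
        self.counts = defaultdict(int)      # user -> impressions so far
        self.seconds = defaultdict(float)   # user -> cumulative duration

    def report(self, user, device, duration_seconds):
        """Accept a partial-compliance report from any of the user's devices."""
        self.counts[user] += 1
        self.seconds[user] += duration_seconds
        return self.satisfied(user)

    def satisfied(self, user):
        return (self.counts[user] >= self.frequency_goal
                and self.seconds[user] >= self.time_goal_seconds)

tracker = ExposureTracker(frequency_goal=4, time_goal_seconds=30)
for device, secs in [("phone", 8), ("phone", 7), ("console", 9), ("phone", 8)]:
    done = tracker.report("joey", device, secs)
print(done)  # True: 4 impressions and 32 seconds across two devices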
[0061] For example, an advertisement event is triggered when twenty users are detected as having a classification as professionals in a certain medical specialty, due to the proximity of a convention or hospital, for which a pharmaceutical or medical device manufacturer is willing to pay a premium advertising rate per capita. As another example, a sporting event then concludes and a large influx of sports fans leaves. The sheer number of fans changes the optimum revenue generating advertisement to one with a lower premium per capita, but an overall larger value. The optimization could further take into consideration the relative rate of travel of the population to change the advertisements in a way to provide effective exposure balanced against opportunities to sell additional advertisement time. [0062] The monitoring across computing environments of various applications on a client device 104, or even on other client devices 144, for opportunities to present advertisements can be further leveraged to capture user behavior for reporting to the marketplace platform 106. For example, the user 140 can enter keywords into a WAP browser search engine that are captured. Navigating links provided on a portal webpage can be tracked. Selection of media content, game content, and utilities applications for download and use can be tracked. Interactions with certain classes of advertisements that are sent in an untargeted fashion to the population of mobile communication devices 104 can be noted. To the extent permissible, communicating with certain business entities (e.g., telephone calls) can be captured. Thus, the unique interaction forms provided by certain mobile communication devices 104 can enhance behavior profiling of a user for targeted micro advertising. Coordination or control of such keyword characterization can be performed at a cross platform search monitor 150 with functionality provided by the advertisement clients 118 and 146. [0063] A further enhancement to the device UI can be provided by multiple actions, represented by icons, used in conjunction with the user interface 114 that are activated based upon the user's choice of response to an advertisement, especially those facilitated by the communication features made available by the mobile communication device 104. Alternatively or in addition, the actions can be selected based on the advertiser's preferences. Alternatively or in addition, the actions can be selected based on a propensity for generating revenue for the marketplace platform 106. [0064] The marketplace platform 106 can utilize a selective advertisement action utility 152 to incorporate such actions and icons and functionality into the advertisement distributed to the mobile communication device 104. For example, some advertisers hope to drive the user to a website, to a telephone customer service number, to an email response, a short message service (SMS) text response, or a click to buy shopping cart interface (e.g., payment and shipping information handled through the operator's billing contract with the user of the mobile communication device 104). A click-to-coupon action, represented by an icon or other means, can allow the mobile communication device 104 itself to serve as a hand-carried "coupon," perhaps presenting a redemption code or rendered barcode for the retailer to accept or for the user to enter online.
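The per-capita optimization of [0061] can be sketched as choosing the public advertisement maximizing (rate per matching viewer × number of matching viewers), with a general-audience advertisement as the default. The rates and audience mix below are invented.

# Illustrative sketch of [0061]; rates and crowd composition are invented.
def best_public_ad(ads, audience_categories):
    """ads: list of (name, target_category_or_None, rate_per_capita)."""
    def revenue(ad):
        name, target, rate = ad
        matching = (len(audience_categories) if target is None
                    else audience_categories.count(target))
        return rate * matching
    return max(ads, key=revenue)[0]

ads = [
    ("soft-drink", None, 0.01),             # default, applies to everyone
    ("pharma", "medical-specialist", 5.00), # premium, narrow audience
]
crowd = ["medical-specialist"] * 20 + ["other"] * 30
print(best_public_ad(ads, crowd))  # 'pharma': 20 x 5.00 beats 50 x 0.01

When the specialists disperse and the crowd grows, the same comparison flips back to the high-volume, low-premium default, which is the dynamic the paragraph describes.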
A click-to-promotion action can allow the marketplace platform 106 to selectively target discounts to particular classes of users, or perhaps an individual user. [0065] Since different kinds of interactions with an advertisement tend to have different value to an advertiser, the selection of actions presented can be placed in a descending order of priority, or could result in a different remuneration value to the marketplace platform 106. For example, a click-to-buy action could have the highest value, although this may be inappropriate for the contractual arrangement with the mobile communication device 104 (e.g., underage youth) or not be suitable for the type of advertisement (e.g., impression advertising for a service). A second tier could be a direct contact with the advertiser (e.g., click-to-call, click-to-email, or click-to-text). A lower tier could be those interactions that show some interest only (e.g., click-to-locate, click-to-content, click-to-save (the advertisement or coupon), etc.). [0066] Although privacy for the users is beneficial for placing the marketplace platform 106 between the advertising platform 102 and the user 140, in some applications a consumer-to-consumer advertising functionality can be facilitated by the communication system 100. The marketplace platform 106 can serve as a broker that makes the introduction for an advertiser to a user 140 who can opt in for direct marketing campaigns. As another example, an individual or association ("trusted entity") 154 can obtain indicia 156 of addressee permission, such as a code or password that enables access to direct marketing features. For example, a professional association can obtain contractual permission for their organization through registration and negotiate with the marketplace platform 106 for a direct advertisement to their members, such as facilitating acceptance of enrolling in a seminar. As another example, a friend could schedule a birthday advertisement to be prominently displayed within a circle of friends, providing a higher likelihood of being noticed over other message formats yet without the inconvenience of leaving many voicemails. As yet another example, an advertiser is only willing to provide a special discount to certain users who are in a special status, such as a very frequent flyer on a certain airline. A targeted click-to-coupon could be sent to such an individual without making such an offer widely available to those the advertiser chooses to exclude. [0067] In FIG. 4, a methodology 200 for end-to-end mobile advertising is depicted by interactions between the mobile communication device 104, the marketplace platform 106, and the advertising platform 102, according to one aspect. It should be appreciated that the user 140 can also utilize a client device 144 that need not be mobile, with the marketplace platform 106 in some applications coordinating certain of these communication steps with either or both devices 104 and 144. The marketplace platform 106 begins by processing a collection of demographic data in block 202. Such data has value, but is denoted as "suspect" in that users do not always provide accurate or complete self-assessments for a number of reasons. This demographic data is augmented at 204 by location reporting provided by the mobile communication device 104 to the marketplace platform 106.
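A sketch of the descending-priority ordering in [0065]. The tier map is a hypothetical encoding of the three tiers named in the text, and the eligibility filter (e.g., no click-to-buy for an underage account) is a stand-in for whatever contractual or suitability check applies.

# Sketch of the [0065] tiers; the encoding and filter are hypothetical.
ACTION_TIER = {
    "click-to-buy": 1,                                       # highest value
    "click-to-call": 2, "click-to-email": 2, "click-to-text": 2,
    "click-to-locate": 3, "click-to-content": 3, "click-to-save": 3,
}

def ordered_actions(available, eligible):
    """Order available, eligible actions from highest- to lowest-value tier."""
    usable = [a for a in available if a in eligible and a in ACTION_TIER]
    return sorted(usable, key=ACTION_TIER.get)

available = ["click-to-save", "click-to-buy", "click-to-call"]
eligible = {"click-to-call", "click-to-save"}  # e.g., buying disallowed
print(ordered_actions(available, eligible))    # call is offered before save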
This location data can be approximate, given a current cell or wireless node from which the communication originates. This location data can be accurately determined from a Global Positioning System (GPS) engine incorporated into the mobile communication device 104, sufficiently accurate to identify the location of the user to specific physical addresses. In addition, user behavior is provided by call activity, depicted as reports at 206. This collected user behavior data is analyzed for behavioral profiling at block 208. As used herein, a behavioral profile encompasses the demographic variables, behavior variables, and other information that goes toward IAO variables (i.e., interests, attitudes, and opinions), although it should be appreciated that some applications consistent with aspects herein may be confined to a subset of such variables. [0068] In block 210, the marketplace platform 106 performs a forecast of the advertising market of the mobile communication devices 104. For example, current advertising usage and the usage of the mobile communication devices 104 overall can be combined with the propensity of certain users of mobile communication devices 104 to benefit from a particular advertiser based on the behavioral profiling. This ad forecast can serve as a basis for negotiating an advertisement campaign with the advertising platform 102, as depicted at 212. The campaign can be defined in terms of reach (e.g., a subset of users of mobile communication devices 104 with a high correlation for the goods or services based on behavioral profile), frequency of advertisement presentations to each user, the cumulative viewing time of an advertisement for each selected user, and/or a location limitation for users proximate to a competitor or the advertiser's business locations. An advertisement campaign can be constrained to a particular calendar schedule with limitations on a begin time and/or an end time. The schedule constraint can also comprise a time of day schedule limitation for campaigns that focus on users who are active at a particular time, such as those who would be influenced to visit a restaurant close to dinner time or to attend a concert. The marketplace platform 106 can also provide tracking of advertisement usage that can serve as a valuable feedback tool for the advertisers to determine effectiveness. The tracking can also serve as a basis for valuing the end-to-end mobile advertising services of the marketplace platform 106. [0069] With the advertising campaign set up, when a mobile communication device 104 signals the marketplace platform 106 at 214 that additional advertisements are needed, the marketplace platform 106 requests single-format advertisements from the advertisement platform at 216. The advertising platform 102 provides the single format advertisements at 218. [0070] At block 220, the marketplace platform 106 formats one or more advertisements into a format suitable for the requesting mobile communication device 104. The marketplace platform 106 micro-targets the advertisements to those mobile communication devices 104 that are deemed to have an appropriate behavioral profile. Part of the formatting includes tagging metrics in accordance with the negotiated terms for the advertising campaign. Examples of these tags are frequency of presentation, duration of presentation, schedule window, location constraints, etc.
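A minimal sketch of the tagged metrics listed in [0070]; the field names and the schedule/location encodings are assumptions for illustration, not the patent's wire format.

# Sketch of the [0070] tag metrics; encodings are assumed.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class AdTags:
    frequency: int                          # presentations per user
    duration_seconds: int                   # cumulative presentation time
    schedule_start: Optional[datetime]      # calendar window begin
    schedule_end: Optional[datetime]        # calendar window end
    location_constraint: Optional[Tuple[float, float, float]]  # lat, lon, radius_km

    def presentable(self, now):
        """Check whether the schedule window permits presentation now."""
        if self.schedule_start and now < self.schedule_start:
            return False
        if self.schedule_end and now > self.schedule_end:
            return False
        return True  # location check omitted here for brevity

tags = AdTags(5, 45, datetime(2009, 2, 1), datetime(2009, 3, 1), None)
print(tags.presentable(datetime(2009, 2, 14)))  # True: inside the window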
The custom formatted advertisements are sent from the marketplace platform 106 to the mobile communication device 104 at 222. [0071] At 224, the mobile communication device 104 presents the advertisements in accordance with the tagged metrics. The tracking of advertisement usage by the mobile communication device 104 is reported intermittently to the marketplace platform 106 as depicted at 226. In addition, some aspects include location reporting as depicted at 228. With this advertisement and location tracking, the marketplace platform 106 correlates the advertisement presentation with the location of the user against a database of monitored locations (e.g., competitors, advertiser's business locations, etc.) in order to infer success or failure of impression advertisements. The mobile communication device 104 in some aspects reports call activity as depicted at 232, such as calls dialed directly by the user or automatically dialed by using a "click to dial" feature of the mobile communication device 104. In some aspects, at 234 the mobile communication device 104 can report advertisement interaction activity (e.g., "click to clip" to save the advertisement for future review by the user, "click to glance" to launch a window to view the advertisement or a more detailed version of the advertisement, "click to locate" to guide the user to the location of the advertiser, etc.). [0072] The tagged metrics can facilitate the user behavior by providing information or active content that directs the user toward the behavior that is to be tracked. In some instances, an advertiser may specify that only certain kinds of user behavior are to be tracked, or that certain behaviors are weighted more heavily as indicating an effective advertisement. For example, a click to locate action can be a stronger indication than a click to save, which in turn can be a stronger indication than a location proximity that is not necessarily proof of visiting the advertising business. [0073] At 236, based on the reported usage data, the marketplace platform 106 can have an opportunity to perform a brokered sale with the advertising platform 102 based on certain kinds of user interactions with the advertisement. At 238, based on the reported usage data, the marketplace platform 106 can report depersonalized advertisement tracking data to the advertising platform 102. This depersonalization can summarize the data into a format conforming to the data of interest to the advertiser. The depersonalization can replace individual identification with a categorization of the consumers of the advertisement in order to preserve user privacy. At 240, the marketplace platform 106 can report advertisement billing, such as basing the amount due on the usage tracking. [0074] In FIG. 5, an exemplary communication system 300 benefits from a mobile advertisement platform 302 that interfaces between advertiser/agency advertisement serving platforms 304, operators and publishers 306, and a population of mobile communication devices 308, in accordance with one implementation. It should be appreciated that a particular user 140 (FIG. 3) may use more than one mobile communication device 308, which can be coordinated by the mobile advertisement platform 302 to accomplish certain advertisement objectives.
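A hedged sketch of the depersonalization step at 238/[0073]: individual identifiers are replaced by a categorization before tracking data leaves the marketplace platform. The record layout is invented.

# Sketch of the depersonalization in [0073]; record layout is invented.
def depersonalize(records, categorize):
    """Strip user identities; report only category plus interaction fields."""
    out = []
    for record in records:
        out.append({
            "category": categorize(record["user_id"]),  # coarse classification
            "ad_id": record["ad_id"],
            "interaction": record["interaction"],
        })
    return out

raw = [{"user_id": "subscriber-123", "ad_id": "ad-1", "interaction": "click-to-call"}]
report = depersonalize(raw, categorize=lambda uid: "category-K")
print(report)  # no user_id survives in the advertiser-facing report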
The user can also interact with an immobile client device, depicted as a dynamic public advertisement display (e.g., billboard, television, computer workstation, waiting room display, public conveyance signage, etc.) 309. The mobile communication device 308 provides indications of user interaction (e.g., pattern of movement) that, when related to the type of immobile client device 309, can indicate exposure to an advertisement. For instance, movement toward a large display is indicative of a likelihood of seeing the advertisement. The advertising serving platforms 304 can comprise operator advertising sales 310, mobile advertising sales 312, Internet advertising sales 314, and/or publisher advertising sales 316, etc., whose particular communication protocols are accommodated by an advertisement sales/agency/advertiser interface 318 to communicate with the mobile advertisement platform 302. In some aspects, operators (e.g., wireless/cellular carriers) 306 can perform functions such as billing and assisting in estimating an available population of mobile communication devices 308 by communicating with the mobile advertisement platform 302 via an operator/publisher interface 320. The mobile advertising platform 302 includes a campaign management component 322 that allows an administrator to select appropriate formatting and metric tagging. This campaign management 322 can further include an action management utility 323 that assists in selecting an icon for the action that is suggestive of the types of communication options afforded by mobile communication devices, and assists in defining a workflow invocation command and parameters for the action (e.g., email, direct purchase, call, text message, save, navigate to content, etc.), as well as prompting to those options appropriate to the advertiser and/or preferred by the marketplace advertisement platform 302 for potential for revenue generation. [0075] In FIG. 6, an illustrative graphical user interface 324 includes a general window 326 that enables a user to enter a campaign identification entry field 328 (e.g., 91 4081 9034), a campaign name entry field 330 (e.g., Martin campaign), a campaign status pull-down menu 332 (e.g., planning), a click-to-action link 334 (i.e., uniform resource locator (URL), e.g., http://news.bbc.co.uk), a campaign description entry field 336 (e.g., click to action - listen to streaming BBC world news channel), a campaign goals entry field 338 (e.g., target audience, behavioral profile categories K, T, AA, frequency 5, time duration 45 seconds), and a category pull-down menu 340 (e.g., Arts & Culture - Arts (General)), according to one aspect. [0076] In an exemplary version, the mobile communication devices 308 are BREW-enabled. The Binary Runtime Environment for Wireless (BREW) software, developed by QUALCOMM Incorporated of San Diego, California, exists over the operating system of a computing device, such as a wireless cellular phone. BREW can provide a set of interfaces to particular hardware features found on computing devices. As such, the click-to-action link 334 can include a BREW "click URL" or other instructions as to how the user can interact with the advertisement (e.g., click to clip, click to call, click to glance, etc.). [0077] The graphical user interface 324 also provides a specific configuration for a subset of the mobile communication devices 308 operating with a specific chipset, hardware, and/or software configuration.
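The FIG. 6 entry fields can be restated as a configuration record; the values mirror the examples given in [0075], while the dictionary shape itself is an assumption made for illustration.

# The [0075] campaign fields as a configuration record; the shape is assumed.
campaign = {
    "campaign_id": "91 4081 9034",
    "name": "Martin campaign",
    "status": "planning",
    "click_to_action_url": "http://news.bbc.co.uk",
    "description": "click to action - listen to streaming BBC world news channel",
    "goals": {
        "behavioral_profile_categories": ["K", "T", "AA"],
        "frequency": 5,
        "time_duration_seconds": 45,
    },
    "category": "Arts & Culture - Arts (General)",
}
print(campaign["goals"]["frequency"])  # 5, the negotiated presentation count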
[0077] The graphical user interface 324 also provides a specific configuration for a subset of the mobile communication devices 308 operating with a specific chipset, hardware, and/or software configuration. In an illustrative window 342, the user has selected a mobile advertisement size of 88, which is defined as 88 pixels wide by 18 pixels high. An image selection field 344 allows the campaign administrator to select an image, such as an image provided by the advertiser that has been manually resized or automatically cropped, reduced, and/or changed in color palette by the window 342. An additional text entry field 346 may be used, such as for instructions, specific to this configuration of mobile communication device 308, on how to interact with this advertisement. A text position pull-down menu 348 can position this additional text, or omit it altogether as is done in the example.

[0078] Returning to FIG. 5, the customized advertisements from the campaign management component 322 are stored in a real-time inventory database 350. Data provided by operators/publishers 306 can be processed by an inventory forecasting component 351, with forecast data stored in database 350, in accordance with one implementation. A targeting and advertisement selection component 352 matches advertisement requests from the mobile communication devices 308 with the customized advertisements in the inventory database 350. Such targeting can comprise a public advertisement component 353 that selects an advertisement display 355 of the immobile client device 309. The selection can be made based upon passive interaction of the user 140 (FIG. 3) as detected by the mobile communication device 308 moving into proximity of the immobile client device 309.

[0079] The communication protocol and advertisement format is translated by a multi-format advertisement serving component 354 for the mobile communication devices 308. In an illustrative aspect, a Triglet Service Adaptor (TSA) 356 of a uiOne™ delivery system (UDS) 358 performs the multi-format advertisement serving function. The uiOne™ architecture developed by QUALCOMM Incorporated as part of BREW provides a set of BREW extensions that enable rapid development of rich and customizable UIs (i.e., active content, over-the-air (OTA) upgradable), helps to evolve the download business beyond applications, provides theming of part or all of the handset UI, and utilizes BREW UI Widgets. Thus, BREW uiOne reduces the time to market for handsets, carrier customization, and consumer personalization. To do this, the BREW uiOne provides a clear set of abstractions, adding two new layers to the application development stack for BREW. The uiOne delivery system 358 is used to update mobile user interfaces (UIs) 360 over-the-air. This delivery system 358 can be deployed in a standalone fashion, allowing operators to leverage the functionality of their own delivery system. Additional benefits can be realized by deploying the uiOne architecture with the uiOne delivery system 358, especially when deployed in conjunction with other elements of the BREW solution (e.g., monetization and billing of downloadable UI packages when the operator does not already have the appropriate infrastructure).

[0080] It should be appreciated with the benefit of the present disclosure that the incorporation of BREW, uiOne, etc., is illustrative and that applications consistent with aspects herein can employ other computing environments, mobile operating systems, user interfaces, and communication protocols. For example, the user interfaces 360 can employ JAVA applets and operating environments.
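The automatic cropping, reduction, and palette change that window 342 applies to an advertiser-supplied image (paragraph [0077] above) can be sketched as follows, assuming the Pillow imaging library; the file names and color count are hypothetical.

```python
# A minimal sketch, assuming Pillow, of fitting an advertiser image to the
# selected 88 x 18 pixel banner size and reducing its color palette.
from PIL import Image, ImageOps

def make_banner(source_path, out_path, size=(88, 18), colors=64):
    img = Image.open(source_path).convert("RGB")
    # Crop and scale to the selected banner dimensions in one step.
    banner = ImageOps.fit(img, size)
    # Reduce the color palette for a constrained handset display.
    banner = banner.convert("P", palette=Image.ADAPTIVE, colors=colors)
    banner.save(out_path)

make_banner("advertiser_logo.png", "banner_88x18.png")
```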
[0081] The mobile user interface 360 thus configured in the illustrative version includes a tab A 362 and a tab B 364 (e.g., "mystuff", which can include a clipped advertisements subfolder). The depicted tab A 362 is selected, showing options such as a selected Games shopping option 366, an applications ("apps") shopping option 368, a themes shopping option 370, and a shopping search option 372. A banner advertisement 374 is displayed with additional text 376 (e.g., "#1 to Clip, #2 to Call") explaining how a user can interact with the advertisement 374, such as by using a dual-tone multi-frequency (DTMF) keypad 378, a dedicated advertisement interaction button (e.g., Clip) 380, and a menu button 382 to reach additional advertisement options, perhaps used in conjunction with steering buttons 384 and a select button 386. An exit button 388 allows backing out of a menu sequence. The advertisement banner 374 can also incorporate one or more icons 375 that graphically communicate what the interaction will perform as well as facilitating the action. Alternatively, the icons can be presented within a menu or icon bar or by another platform- or implementation-specific method.

[0082] The mobile communication device 308 provides functions that operate to support and monitor the user interaction with advertisements 374, such as an advertisement cache 390, an advertisement tracking component 392, a contextual targeting component 394, a location monitoring and reporting component 396, and an advertising client 398, which in the illustrative version is a BREW extension. The location monitoring and reporting component 396 can derive location from a Global Positioning System (GPS) 400. Alternatively, radio frequency identification systems, wireless access points, cellular direction finding, etc., can provide approximate location information about a mobile communication device that is temporarily screened from GPS reception or lacks an inherent location sensing capability. Immobile client devices 309 can have a predetermined location value 401 accessed by the mobile advertisement platform 302 rather than a sensed value. This location information can be utilized for public advertising in which passive interaction is surmised by the public advertising component 353 of the mobile advertisement platform 302.

[0083] The mobile advertising platform 302 stores the data received from the mobile communication devices 308 in the real-time inventory database 350. A reporting and analytics component 402 summarizes, filters, and formats the data received from the database 350, filtered of individual identification information by an advertisement tracking identifier filter 404. The prepared data is used by a billing component 406 that sends bills to advertising serving platforms 304 and/or by a settlement component 408 that interacts with operators and publishers 306.
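A minimal sketch of the kind of depersonalization performed by the advertisement tracking identifier filter 404 follows: individual identifiers are dropped and replaced by audience-category counts before the data is reported. The record layout and function names are assumptions for illustration.

```python
# Illustrative depersonalization: per-user tracking records are summarized
# into category-level counts that contain no individual identifiers.
from collections import Counter

def depersonalize(tracking_records, profile_lookup):
    """Summarize per-user tracking records into category-level counts.

    profile_lookup maps a subscriber ID to a behavioral/demographic
    category; the returned summary carries no subscriber IDs.
    """
    summary = Counter()
    for record in tracking_records:
        category = profile_lookup.get(record["subscriber_id"], "uncategorized")
        summary[(record["ad_id"], category, record["event"])] += 1
    return [
        {"ad_id": ad, "audience_category": cat, "event": event, "count": n}
        for (ad, cat, event), n in summary.items()
    ]
```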
[0084] Returning to FIG. 6, the window 342 can facilitate advertisement action and icon selection that is appropriate for the capabilities of the type of mobile communication device 308, appropriate for the communication avenues allowed by the advertiser (e.g., text messaging, emailing, webpage, telephone call, etc.), and/or optimal for the revenue generating potential of the marketplace advertisement platform 302. A plurality of banner size selection radio buttons and depictions 410 can change the rendering of a selected banner 412 in the image selection field 344 to make it appropriate for a particular type of mobile communication device 308.

[0085] A range of actions, represented by their assigned icons, can be selected for incorporation, such as by drag and drop or by selecting. In some applications, those action icons are disabled (e.g., grayed out) if not appropriate for the particular advertisement, such as not having corresponding action information defined in the general window 326, or if not available on the type of mobile communication device 308. Although not depicted, the selection can allow multiple actions to be added to the advertisement if supported by the mobile communication device 308. Alternatively or in addition, a hierarchy of preferred action choices when multiple choices are available can be specified, with the first choice displayed. The action icon actually displayed on a particular mobile communication device 308 could be dynamically changed to accommodate a limitation of the user's contractual relationship or the local access network. For example, the user may not have paid for short message service, or the service may not be available at a certain locale.

[0086] Examples of action icons that are suggestive of function as well as giving a wide range of interaction possibilities for advertisements include, but are not limited to, the following: (1) a click-to-call icon 420 dials the number specified by the advertiser to encourage calling; (2) a click-to-WAP (wireless application protocol) icon 422 launches a browser, allowing the user to manually type in a link provided on the advertising banner 412; (3) a click-to-landing icon 424 allows the browser to return to a prior page or a home page, which can be desired due to the slow page loading of a mobile communication device 308 using a limited-throughput wireless channel; (4) a click-to-brochure icon 426 renders a document depiction for additional information about the advertisement; (5) a click-to-email icon 428 sends an automated email response to the advertiser; (6) a click-to-clip (keep/save) icon 430 saves the advertisement for later access; (7) a click-to-forward icon 432 launches a utility to forward the advertisement to an addressee entered manually or taken from the user's address book; (8) a click-to-message icon 434 accesses a short message utility pre-addressed to the advertiser; (9) a click-to-content icon 436 navigates to a web link provided by the advertiser; (10) a click-to-locate icon 438 pops up a map to the advertiser, perhaps the closest location with reference to location information from the mobile communication device 308; (11) a click-to-promotion icon 440 can activate information about how to enter a sweepstakes, contest, promotion, etc.; (12) a click-to-coupon icon 442 can access a barcode, alphanumeric password, etc. for entering into a full browser, for mail-in redemption, or to show to a retailer on the mobile communication device 308 in order to access a discount deal; and (13) a click-to-buy icon 444 initiates a purchase transaction. In some applications, the service provider for the mobile communication device 308 can enhance the transaction by providing the shipping and/or billing information for the user associated with the device 308, including adding the purchase to the service billing.
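One way to picture the selection of action icons just described (and the methodology of FIG. 14, discussed later) is as a capability filter: an action is offered only if the device supports its workflow and the advertiser has supplied the information the action needs. The following sketch is illustrative; the action table and names are hypothetical.

```python
# Hypothetical action table: each action needs a piece of advertiser
# information and a device workflow (communication channel) to activate.
ACTIONS = {
    "click_to_call":    {"needs": "phone_number",  "workflow": "voice_call"},
    "click_to_email":   {"needs": "email_address", "workflow": "email"},
    "click_to_message": {"needs": "phone_number",  "workflow": "sms"},
    "click_to_content": {"needs": "content_url",   "workflow": "browser"},
    "click_to_clip":    {"needs": None,            "workflow": "local_save"},
}

def available_actions(advertiser_info, device_workflows):
    """Return the action names that can be activated on this device."""
    offered = []
    for name, spec in ACTIONS.items():
        if spec["needs"] is not None and spec["needs"] not in advertiser_info:
            continue  # advertiser did not supply, e.g., a telephone number
        if spec["workflow"] not in device_workflows:
            continue  # e.g., SMS not available on this device or plan
        offered.append(name)
    return offered

print(available_actions({"phone_number": "+1-555-0100"},
                        {"voice_call", "local_save"}))
# ['click_to_call', 'click_to_clip']
```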
[0087] In FIG. 7, an exemplary communication device 500 is depicted, which according to some aspects can be any type of computerized device. For example, the communication device 500 may comprise a mobile wireless and/or cellular telephone. Alternatively, the communication device 500 may comprise a fixed communication device, such as a Proxy Call/Session Control Function (P-CSCF) server, a network device, a server, a computer workstation, etc. It should be understood that the communication device 500 is not limited to the described or illustrated devices, but may further include a Personal Digital Assistant (PDA), a two-way text pager, a portable computer having a wired or wireless communication portal, and any type of computer platform having a wired and/or wireless communications portal. Further, the communication device 500 can be a remote-slave or other similar device, such as remote sensors, remote servers, diagnostic tools, data relays, and the like, which does not have an end user thereof, but which simply communicates data across a wireless or wired network. In alternate aspects, the communication device 500 may be a wired communication device, such as a landline telephone, personal computer, set-top box, or the like. Additionally, it should be noted that any combination of any number of communication devices 500 of a single type or of a plurality of the aforementioned types may be utilized in a cellular communication system (not shown). Therefore, the present apparatus and methods can accordingly be performed on any form of wired or wireless device or computer module including a wired or wireless communication portal, including without limitation wireless modems, Personal Computer Memory Card International Association (PCMCIA) cards, access terminals, personal computers, telephones, or any combination or sub-combination thereof.

[0088] Additionally, the communication device 500 may include a user interface 502 for purposes such as viewing and interacting with advertisements. This user interface 502 includes an input device 504 operable to generate or receive a user input into the communication device 500, and an output device 506 operable to generate and/or present information for consumption by the user of the communication device 500. For example, input device 504 may include at least one device such as a keypad and/or keyboard, a mouse, a touch-screen display, a microphone in association with a voice recognition module, etc. Further, for example, output device 506 may include a display, an audio speaker, a haptic feedback mechanism, etc. Output device 506 may generate a graphical user interface, a sound, a feeling such as a vibration, a Braille text producing surface, etc.

[0089] Further, communication device 500 may include a computer platform 508 operable to execute applications to provide functionality to the device 500, and which may further interact with input device 504 and output device 506. Computer platform 508 may include a memory, which may comprise volatile and nonvolatile memory portions, such as read-only and/or random-access memory (RAM and ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), flash memory, and/or any memory common to computer platforms.
Further, memory may include active memory and storage memory, including an electronic file system and any secondary and/or tertiary storage device, such as magnetic media, optical media, tape, soft and/or hard disk, and removable memory components. In the illustrative version, memory is depicted as RAM memory 509 and a nonvolatile local storage component 510, both connected to a data bus 512 of the computer platform 508.

[0090] Further, computer platform 508 may also include a processor 514, which may be an application-specific integrated circuit (ASIC), or other chipset, processor, logic circuit, or other data processing device. In some aspects, such as when communication device 500 comprises a cellular telephone, the processor or other logic such as an application-specific integrated circuit (ASIC) 516 may execute an application programming interface (API) 518 that interfaces with any resident software components, depicted as applications (e.g., games) 520 that may be active in memory 509, or with other functions (e.g., communication call control, alarm clock, text messaging, etc.). It should be appreciated with the benefit of the present disclosure that applications consistent with aspects of the present disclosure may omit other applications and/or omit the ability to receive streaming content such as voice call, data call, and media-related applications in memory 509. Device APIs 518 may run on top of a runtime environment executing on the respective communication device. One such API 518 runtime environment is the Binary Runtime Environment for Wireless (BREW) API 522, developed by QUALCOMM Incorporated of San Diego, California.

[0091] Additionally, processor 514 may include various processing subsystems 524 embodied in hardware, firmware, software, and combinations thereof, that enable the functionality of communication device 500 and the operability of the communication device 500 on communications system 300 (FIG. 5). For example, processing subsystems 524 allow for initiating and maintaining communications, and exchanging data, with other networked devices as well as within and/or among components of communication device 500. In one aspect, such as in a cellular telephone, processor 514 may include one or a combination of processing subsystems 524, such as: sound, non-volatile memory, file system, transmit, receive, searcher, layer 1, layer 2, layer 3, main control, remote procedure, handset, power management, diagnostic, digital signal processor, vocoder, messaging, call manager, Bluetooth system, Bluetooth LPOS, position determination, position engine, user interface, sleep, data services, security, authentication, USIM/SIM (universal subscriber identity module/subscriber identity module), voice services, graphics, USB (universal serial bus), multimedia such as MPEG (Moving Picture Experts Group) protocol multimedia, GPRS (General Packet Radio Service), short message service (SMS), short voice service (SVS™), web browser, etc. For the disclosed aspects, processing subsystems 524 of processor 514 may include any subsystem components that interact with applications executing on computer platform 508.

[0092] Computer platform 508 may further include a communications module 526 that enables communications among the various components of communication device 500, as well as being operable to provide communications related to receiving and tracking advertisements presented on and/or interacted with via the user interface 502.
Communications module 526 may be embodied in hardware, firmware, software, and/or combinations thereof, and may further include all protocols for use in intra-device and inter-device communications. A GPS engine 528 or other location sensing components provide location information of the communication device 500.

[0093] Certain of these capabilities of the communication device 500 can be facilitated by code loaded from local storage 510, retained in memory 509, and executed by the processor 514, such as an operating system (OS) 530. A user interface (UI) module 532 facilitates interactive control of the user interface 502. The UI module 532 includes an advertising interaction component 534 that provides tailored interaction options for particular advertisements that are drawn from an advertisement cache 536 in an order specified by an advertisement queue 538 ordered by an advertising client 540, in particular an advertising packaging Triglet service adaptor 542. The usage of advertisements is captured by an advertising tracking component 544. A location reporting component 546 can include logic that selectively reports device location.

[0094] In one aspect, the UI module 532 can include a keyword monitor 547 that monitors all user inputs in order to capture keywords or data from which keywords can be inferred. Thereby, no matter what application or communication function is being utilized, user behavior associated with keywords can be captured.

[0095] In one aspect, the BREW APIs 522 provide the ability for applications to call Device APIs 518 and other functions without having to be written specifically for the type of communication device 500. Thus, an application 520 or components for end-to-end mobile advertising on the communication device 500 may operate identically, or with slight modifications, on a number of different types of hardware configurations within the operating environment provided by BREW API 522, which abstracts certain hardware aspects. A BREW extension 548 adds additional capability to the programming platform of the BREW API 522, such as offering MP3 players, Java Virtual Machines, etc. As an example, the UI module 532 can be a BREW extension 548.

[0096] In order to distribute computational overhead and/or to reduce transmission overhead on the communication system 300 (FIG. 5), an artificial intelligence (AI) component 550 and/or a rule-based logic component 552 can infer user behavior for reporting, make decisions as to when a reportable advertising-related event has occurred, and/or extrapolate location based on intermittent location sensing, etc.

[0097] The rule-based logic component 552 can be employed to automate certain functions described or suggested herein. In accordance with this alternate aspect, an implementation scheme (e.g., a rule) can be applied to define types of attributes that should be acted upon or ignored, correlate language elements to attributes, create rules that are aware of location sensing status, sense a delay since the last user interaction to determine if advertisement viewing is occurring, etc. By way of example, it will be appreciated that the rule-based implementation can automatically define criteria for types of user interactions that can be partially intruded upon by an advertisement. For example, during loading of a game, an advertisement can be allowed to be displayed full screen.
When a half-screen application is running, for example a text messaging application, an advertisement banner can be displayed, which a user can selectively enable in order to receive subsidized service rates, for example. The rule-based logic component 552 could request impression advertising instead of click-to-action advertising in response to an inference that the user does not directly interact with advertisements. In response thereto, the rule-based implementation can change the amount of notifications given, the level of detail provided, and/or prevent edits altogether that would result in a reset.

[0098] The AI component 550 can facilitate automating performance of one or more features described herein, such as predicting user behavior, extrapolating intermittent location data, and adjusting advertisement interaction options based on machine learning. Thus, employing various AI-based schemes can assist in carrying out various aspects thereof. For instance, the AI component 550 could be trained in a learning mode wherein the user's location is analyzed against a database of locations in order to create the behavioral profile. Then, certain patterns of user behavior can be classified.

[0099] A classifier is a function that maps an input attribute vector, x = (x1, x2, x3, x4, ..., xn), to a class label class(x). A classifier can also output a confidence that the input belongs to a class, that is, f(x) = confidence(class(x)). Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring utilities and costs into the analysis) to predict or infer an action that a user desires to be automatically performed.

[00100] A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs that splits the triggering input events from the non-triggering events in an optimal way. Other classification approaches, including naive Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, maximum entropy models, etc., can be employed. Classification as used herein is also inclusive of statistical regression that is utilized to develop models of priority.

[00101] As will be readily appreciated from the subject specification, the subject disclosure can employ classifiers that are pre-trained (e.g., via generic training data from multiple users) as well as methods of reinforcement learning (e.g., via observing user behavior, observing trends, receiving extrinsic information). Thus, the subject disclosure can be used to automatically learn and perform a number of functions, including but not limited to determining, according to predetermined criteria, what constitutes a reset condition of concern, when/if to communicate an impending controller reset, when/if to prevent a controller reset, preferences for types of data to exchange, etc.
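A minimal sketch of the classification just described, assuming the scikit-learn library, is shown below. Here the signed distance from the SVM's separating hypersurface (decision_function) stands in for the confidence f(x); the training data is purely illustrative.

```python
from sklearn.svm import SVC

# Toy attribute vectors (e.g., normalized interaction rates) and labels
# indicating whether a user acted on a past advertisement.
X = [[0.1, 0.0], [0.2, 0.1], [0.0, 0.2], [0.3, 0.2],
     [0.9, 0.8], [0.8, 0.9], [1.0, 0.7], [0.7, 1.0]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

clf = SVC(kernel="linear").fit(X, y)

x_new = [[0.7, 0.6]]
label = int(clf.predict(x_new)[0])               # class(x)
margin = float(clf.decision_function(x_new)[0])  # distance from hypersurface
print(label, abs(margin))                        # f(x) ~ |margin|
```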
[00102] In FIG. 8, a methodology 600 for mobile communication device advertising, largely performed by the communication system of FIG. 5, begins in block 602 with an advertising administrator preparing an advertisement for deployment on mobile communication devices, according to one aspect. A mobile communication device client requests new advertisements, such as banner advertisements, from the marketplace platform (e.g., the uiOne Delivery System (UDS)) in block 604. In block 606, the advertising packaging Triglet Service Adapter (TSA) of the UDS requests multiple advertisements (e.g., images, metadata, etc.). In block 608, with the advertisements now received by the mobile communication device, the user interface displays a banner advertisement. In block 610, the advertisement provides one or more methods for a user to interact with or respond to the advertisement. For instance, a wireless application protocol (WAP) browser can be activated by a "click to glance" operation in block 612. As another example, a "click to call" can be automatically invoked, or a manually dialed call correlated to a telephone number displayed on the advertisement, depicted at 614 as "call dialer." As yet another example, the user interface can provide a coupon clipping function, depicted at block 616. In response to this interaction, the mobile communication device launches the advertisement action as requested in block 618. This interaction is then tracked for reporting advertisement usage in block 620.

[00103] In FIG. 9, a methodology 700 for end-to-end mobile advertising includes features enabled by location sensing of the mobile communication devices. In block 702, demographic profiling is collected and maintained, although the weight given to such inputs can be limited, in accordance with one implementation. In block 703, location-based behavioral profiling is performed, based upon location reports from mobile communication devices, from which behavioral preferences of a user of the device can be inferred. This process is discussed below with regard to FIG. 10.

[00104] In block 704, a methodology for selecting and valuing advertising icon actions leverages the increased communication options that can be available on the mobile communication device and/or with the advertiser, which is discussed in greater detail below with regard to FIG. 14.

[00105] In block 705, behavioral profiling of the user is enhanced by capturing keywords entered into a WAP browser and other interactions with the mobile communication device 308. In order to encompass a broader scope of interaction, a utility can monitor the user interface directly to capture keystrokes, perhaps correlated with what is being displayed. Alternatively or in addition, the keyword characterization can occur upstream in the communication system, especially for limited-capability mobile communication devices 308.

[00106] In block 706, a micro-targeted advertisement process is performed, as discussed above for FIG. 8, in support of location-disabled mobile communication devices. Another aspect, in block 710 and discussed below with regard to FIG. 11, provides for reach-frequency-time advertising. An additional aspect, in block 712, leverages the location and metric tagging capabilities to perform an interceptor advertisement campaign, discussed below with regard to FIG. 12. Yet a further aspect, in block 714, leverages the metric tagging capabilities in order to provide timed couponing advertisements, discussed below with regard to FIG. 13.

[00107] A critical mass billboard advertising methodology (block 716) can be performed in instances in which location information for a mobile communication device is used in conjunction with a dynamic public advertising display, as discussed below with regard to FIG. 15. Also, consumer-to-consumer advertising can be performed (block 718) for trusted entities that wish to perform user-targeted advertising.
[00108] In block 720, advertising tracking can comprise, in whole or in part, tracking of user interaction with the advertisement. In one aspect, user interaction can comprise a click to action (block 722), which can navigate to a web page of the advertiser. Click to action can also invoke a request to receive a call from the advertiser or to call the advertiser. Click to action can also invoke SMS or other communication channels. In another aspect, user interaction can be click to clip (block 724), which allows a user to clip advertisements for later viewing. For example, clipping an advertisement in the middle of game play avoids disrupting the user experience. Promotional content can be saved for repeated viewing, such as viral videos that provide entertainment or informational value to the user while serving as impression or brand advertising for the advertiser. As a further aspect, the user interaction can be click to locate in block 726. For example, activating the advertisement can launch navigation information to the location of the advertiser. Click to locate can comprise being sensed as entering the location of the advertiser, which is deemed a successful impression advertisement. Click to locate can comprise a user taking his advertisement display to the advertiser as an electronic discount coupon, which can be manually or automatically correlated with the advertisement for tracking of success. In yet another aspect, the user interaction can comprise click to glance (block 728), wherein an application is launched in another window of the user interface of the mobile communication device. In block 730, the user responses associated with the advertisement can be a source for tracking and updating the user behavioral profile.

[00109] In FIG. 10, a methodology 800 for performing location-informed behavioral profiling can comprise maintaining a location database of advertisers and competitors in block 802, in accordance with one implementation. Such location correlation can include prospective advertisers that can be approached about end-to-end mobile advertising. In block 804, locations of mobile subscribers are monitored. When a subscriber is determined to be in a monitored location in block 806, a presumed transaction behavior is stored in block 808. A pattern can be correlated from one or more such presumed transaction behavior instances in order to enhance a behavioral profile of the user in block 810.
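Blocks 804-810 can be sketched as a simple proximity check of a reported subscriber position against the database of monitored business locations, with a presumed transaction recorded on a match. The record layouts, radius, and function names below are assumptions for illustration.

```python
# Illustrative geofence check for blocks 804-810.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def record_presumed_transactions(report, monitored_locations, profile,
                                 radius_m=50.0):
    """Append a presumed transaction for each monitored location the
    subscriber's reported position falls within (blocks 806-808)."""
    for loc in monitored_locations:
        if haversine_m(report["lat"], report["lon"],
                       loc["lat"], loc["lon"]) <= radius_m:
            profile.setdefault("presumed_transactions", []).append(
                {"business": loc["name"], "time": report["time"]})
    return profile
```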
[00110] In FIG. 11, a methodology 900 for reach-frequency-time advertising begins in block 902 with forecasting a behavioral/demographic population of mobile communication devices that can benefit from a particular advertisement for goods or services, according to one aspect. A micro-targeted advertisement is sent to this forecasted population in block 904. In block 905, the various uses of the user interface (UI) are monitored, such as use of the calling screen, a text messaging screen, a webpage browsing screen, a game screen, or a personal organizer screen (e.g., calculator, calendar, contact list, notepad, etc.). Depending on the available screen size, etc., advertising space can be available, either during use or when loading and/or exiting a screen. At the device, an opportunity is recognized for presenting an advertisement on the user interface (UI) in block 906. For example, the device UI is activated as a user selects menu options, etc., such that the UI is active and viewing of the advertisement can be presumed.

[00111] In block 908, an advertisement is selected from those advertisements cached on the device. If the next advertisement queued for presentation is determined to have expired in block 910, then the next advertisement in the queue is selected in block 912. In block 914, with an unexpired advertisement accessed, the advertisement is presented (e.g., displayed) on the UI. The usage tracking for this advertisement is updated with an incremented frequency count in block 916, and the cumulative duration of display is monitored in block 918. If a user has not taken an action that would cause the advertisement banner to be left in block 920, then a further determination is made in block 922 as to whether a time target has been reached, either for this particular frequency count or for a total duration of display on this mobile communication device. If not, processing returns to block 918. If the time limit is reached in block 922, the advertisement is replaced in the queue in block 924 with the next advertisement, and processing returns to block 906. If in block 920 the user has taken an action that warrants leaving the advertisement banner, then a further determination is made in block 926 as to whether a frequency count target has been reached. If not, the advertisement is returned to or maintained in the queue to be repeated after a suitable interval in block 928, and processing returns to block 906. If the frequency count target has been reached in block 926, then the advertisement is replaced in the queue in block 924 and processing returns to block 906.

[00112] The frequency and duration can be prescribed to be associated with a certain use of the wireless device. An advertiser may want a game advertisement to run only for users who use their wireless device for gaming. As another example, use as a telephone can omit advertisements, as the user is paying a carrier for this service. By contrast, a discounted or demonstration version of a game can be accepted along with advertisements that warrant the subsidized cost. However, in the illustrative aspect, all uses of the user interface (UI) conducive to advertising can be used as opportunities to display advertisements. The calculation of frequency and duration counts each presentation. Thus, cross-content advertising occurs when an advertising campaign spans multiple types of wireless device use. As an illustrative example, consider a wireless device user Joey, who is a 14-year-old male skateboard fan, as determined by his behavioral and demographic profiles. A sports shoe advertiser directs that subscribers should view a shoe ad four times for a total of 30 seconds on their handsets. Joey views the shoe ad as part of playing a skateboarding game, and then goes on to the Financial News Network webpage to receive stock quotes, where he receives the same ad campaign from the shoe advertiser, which counts as the second viewing of the ad and part of the 30-second duration. Whatever content Joey views, including his uiOne Homescreen, Joey sees the shoe ad until the metrics are satisfied.
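A simplified sketch of the queue handling in blocks 908-928 follows: expired advertisements are skipped, and an advertisement is retired from the queue once both its frequency count target and its cumulative display duration target are met. The data layout and function names are hypothetical.

```python
# Illustrative reach-frequency-time queue handling (blocks 908-928).
import time
from collections import deque

def next_advertisement(queue, now=None):
    """Select the next unexpired advertisement (blocks 908-912)."""
    now = now if now is not None else time.time()
    while queue:
        ad = queue[0]
        if ad.get("expires", float("inf")) > now:
            return ad
        queue.popleft()  # expired: select the next advertisement in the queue
    return None

def record_presentation(queue, ad, displayed_seconds):
    """Update frequency and duration tracking (blocks 916-928); assumes
    ad is at the front of the queue, as returned by next_advertisement."""
    ad["frequency"] = ad.get("frequency", 0) + 1
    ad["duration"] = ad.get("duration", 0.0) + displayed_seconds
    queue.popleft()
    if ad["frequency"] < ad["freq_target"] or ad["duration"] < ad["time_target"]:
        queue.append(ad)  # repeat after a suitable interval (block 928)
    # otherwise both targets are satisfied and the ad is not re-queued

queue = deque([{"id": "shoe_ad", "freq_target": 4, "time_target": 30.0}])
ad = next_advertisement(queue)
record_presentation(queue, ad, displayed_seconds=10.0)  # 1 view, 10 s so far
```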
[00113] In FIG. 12, a methodology 940 for interceptor micro-targeted advertising begins by utilizing a location-informed behavioral profile in order to predict a transaction in block 942, according to one aspect. An advertisement is requested or located in the advertisement cache as an interceptor advertisement opportunity when the predicted transaction is at a competitor business. The advertisement billing rate can be increased, for example, if the advertiser chooses to send advertisements to those going to competitors. Revenue-optimizing advertising auctioning can thus increase the priority of such opportunities.

[00114] In some aspects, the advertiser chooses to target a specific window of opportunity when the user may be the most susceptible to changing behavior if presented with an advertisement. Thus, in block 946, the location of the mobile subscriber and the time/date are monitored in order to comply with the presentation criteria specified by the advertisement campaign. For example, a user may tend to go to a competitor restaurant for lunch on Fridays at noon. The advertiser may choose to present an advertisement to such users at 11:30, and/or when the user is within three minutes of travel from the advertiser's business based on current average speed, and/or when the user is within half a mile of the competitor's location. In block 948, a determination is made as to whether the time/proximity metrics have been triggered. If so, the interceptor advertisement is presented in block 950. Although not depicted, the user can interact with the advertisement in a way that could be deemed a success of the advertisement. In the instance of an impression advertisement, as depicted in block 952, the location of the mobile subscriber is monitored. If a competitor location is entered in block 954, then in block 956 the advertisement is tracked as having failed in this instance. If a competitor location is not entered in block 954, then a determination is made as to whether the interceptor advertiser location has been entered in block 958. If so, then the advertisement can be tracked as having succeeded in block 960. If neither the competitor nor the interceptor location is entered within a reasonable period of time, then the advertisement can be tracked as having had an inconclusive effect in block 962.

[00115] In FIG. 13, a methodology 970 for timed couponing on mobile communication devices takes advantage of time-tagged metrics (e.g., begin time, target time, and/or end time) associated with advertisements in an advertising repository in block 972, according to one aspect. An advertisement cache in the mobile device is refreshed with timed coupon advertisements in block 974. The advertisement queue is optimized so that timed coupon advertisements are scheduled for presentation within the schedule metric in block 976. Then a determination is made in block 978 that an advertisement is needed for the user interface. If so, then a further determination is made in block 980 to confirm that any begin time metric has been met. If not, the next advertisement in the queue is selected in block 982 and processing returns to block 980. If the begin time has been met in block 980, then a further determination is made in block 984 as to whether the end time has been exceeded. If so, the advertisement is deleted from the queue in block 986 and the next advertisement in the queue is selected in block 982. If the advertisement end time has not been exceeded in block 984, then the advertisement is displayed on the UI in block 988.
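The begin/end time checks of blocks 980-988 can be sketched as a small queue filter; the field names and the use of epoch seconds are assumptions for illustration.

```python
# Illustrative timed-coupon selection (blocks 980-988).
import time

def select_timed_coupon(queue, now=None):
    """Return the first coupon whose begin time has been met and whose
    end time has not been exceeded; drop expired coupons from the queue."""
    now = now if now is not None else time.time()
    for ad in list(queue):
        if ad.get("end_time", float("inf")) < now:
            queue.remove(ad)          # end time exceeded: delete (block 986)
            continue
        if ad.get("begin_time", 0.0) > now:
            continue                  # begin time not met: try the next one
        return ad                     # display on the UI (block 988)
    return None
```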
[00116] In FIG. 14, a methodology 1200 for selecting advertising icon actions suitable for a mobile communication device begins by defining advertising icons suggestive of and operable for all the possible actions, which might include, but are not limited to, click-to-call, click-to-brochure, click-to-clip, click-to-message, click-to-locate, click-to-WAP, click-to-email, click-to-forward, click-to-promotion, click-to-coupon, click-to-buy, and click-to-landing (block 1202), according to one aspect. The client device configuration is accessed to determine limitations on the types of workflows (e.g., communication channels) available, limitations on input and output of the user interface, etc. (block 1204). A subset of advertising actions and icons is presented that is appropriate for the type of device. The list can also indicate which advertising icons have been supplied sufficient information regarding the advertiser to activate (e.g., email address, telephone number, website, uniform resource locator (URL) for a brochure, etc.) (block 1206). In particular, in an illustrative implementation the list contains a set of actions, where each action contains an icon or an icon reference and a workflow command and parameters (e.g., a BREW URI on a BREW platform). A selection process, either automatic or with user prompts, can guide placement and configuration of advertising action icons for inclusion. Selection can be influenced by the relative value to the advertiser of the different types of activation, incorporating a hierarchy for suggestion or rendering (block 1208).

[00117] In FIG. 15, a methodology 1300 for critical mass billboard advertising includes tracking the location of a population of mobile communication devices (block 1302), in accordance with one implementation. A determination is made of client devices sensed to be within proximity of a dynamic public advertisement display (block 1304). Demographic and/or behavioral profiles of users of the proximate client devices are accessed in order to select appropriate advertisements (block 1306). Based on this population data, appropriate advertisement bids are accessed (block 1308). Revenues are optimized by selecting an advertisement that generates the highest bid based upon the sensed population (block 1310).

[00118] In FIG. 16, a methodology 1400 for consumer-to-consumer advertising leverages the advertising distribution capabilities of the marketplace platform. User permission is verified for a particular trusted entity (e.g., an individual or fraternal association) (block 1402), according to one implementation. The time constraints are defined for the advertisement purchase (e.g., holiday, birthday, proximity to a meeting event, etc.) (block 1404). Interactive options are incorporated into the advertisement (block 1406). User behavior is monitored for an opportunity within the time window for presenting the advertisement (block 1408). The advertisement is presented on the user interface of the mobile communication device (block 1410).

[00119] In FIG. 17, an exemplary network distribution device 1700 has at least one processor 1702 for executing modules in a computer-readable storage medium (memory) 1704 for distributing advertisement content to a mobile communication device. The network distribution device 1700 can comprise the marketplace platform 12, 106, 302 (FIGS. 1-5) or perform a portion of the functions thereof.
In the illustrative modules depicted, a first module 1706 provides means for identifying a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons. A second module 1708 provides means for selecting an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. A third module 1710 provides means for sending an advertisement associated with the advertisement icon to the mobile communication device for presentation.

[00120] In FIG. 18, an exemplary mobile communication device 1800 has at least one processor 1802 for executing modules in a computer-readable storage medium (memory) 1804 for presenting advertisements. In the illustrative modules depicted, a first module 1806 provides means for incorporating a plurality of advertisement actions, each action associated with a communication function of a mobile communication device and each action associated respectively with one of a plurality of icons. A second module 1808 provides means for receiving a selection of an advertisement action from the plurality of advertisement actions based upon availability of the associated communication function for the mobile communication device and accessibility of an advertiser target by the associated communication function. A third module 1810 provides means for receiving an advertisement associated with the advertisement icon at the mobile communication device for presentation. A fourth module 1812 provides means for implementing the selected advertisement action in response to an input by a user interacting with the advertisement via a user interface of the mobile communication device.

[00121] It should be appreciated that aspects described herein segregate certain functions for network-level storage and processing and other functions for performance by a mobile communication device. It should be appreciated with the benefit of the present disclosure that applications consistent with these aspects can include configurations with more distributed processing to reduce computational overhead at a centralized location and/or reduce communication loads. Alternatively, some limited-capability mobile devices can be served with mobile advertising with additional processing centralized.

[00122] The various illustrative logics, logical blocks, modules, and circuits described in connection with the versions disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
Additionally, at least one processor may comprise one or more modules operable to perform one or more of the steps and/or actions described above.

[00123] Further, the steps and/or actions of a method or algorithm described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Further, in some aspects, the processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. Additionally, in some aspects, the steps and/or actions of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a machine-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

[00124] While the foregoing disclosure discusses illustrative aspects and/or implementations, it should be noted that various changes and modifications could be made herein without departing from the scope of the described aspects and/or implementations as defined by the appended claims. Furthermore, although elements of the described aspects and/or implementations may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Additionally, all or a portion of any aspect and/or implementation may be utilized with all or a portion of any other aspect and/or implementation, unless stated otherwise. |
Embodiments of apparatuses and methods for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system are disclosed. In one embodiment, an apparatus includes two processor cores, a micro-checker, a global checker, and fault logic. The micro-checker is to detect whether a value from a structure in one core matches a value from the corresponding structure in the other core. The global checker is to detect lockstep failures between the two cores. The fault logic is to cause the two cores to be resynchronized if there is a lockstep error but the micro-checker has detected a mismatch. |
1. A device comprising: a first core including a first structure; a second core including a second structure; a micro-checker to detect whether a first value from the first structure matches a second value from the second structure; a global checker to detect a lockstep failure between the first core and the second core; and fault logic to cause the first core and the second core to be resynchronized if the global checker detects the lockstep failure and the micro-checker detects a mismatch between the first value and the second value.
2. The device of claim 1, wherein the micro-checker includes a comparator to compare the first value to the second value.
3. The device of claim 1, wherein the global checker includes a comparator to compare a first output of the first core with a second output of the second core.
4. The device of claim 1, wherein the fault logic is further to indicate that an uncorrectable error has been detected if the global checker detects the lockstep failure and the micro-checker detects a match between the first value and the second value.
5. The device of claim 1, wherein: the first core further includes a third structure and a fourth structure; the second core further includes a fifth structure and a sixth structure; the first structure includes first fingerprint logic to generate the first value based on a third value from the third structure and a fourth value from the fourth structure; and the second structure includes second fingerprint logic to generate the second value based on a fifth value from the fifth structure and a sixth value from the sixth structure.
6. The device of claim 1, wherein: the architectural state of the first core is independent of the first value; and the architectural state of the second core is independent of the second value.
7. The device of claim 6, wherein: the first structure is a first prediction structure; and the second structure is a second prediction structure.
8. The device of claim 1, wherein the fault logic is further to cause the first value and the second value to be regenerated if the global checker detects the lockstep failure and the micro-checker detects the mismatch.
9. The device of claim 8, wherein: the first structure is a first cache; the first value is a first cache entry; the second structure is a second cache; and the second value is a second cache entry.
10. The device of claim 9, wherein the fault logic is further to cause the first cache entry and the second cache entry to be reloaded if the global checker detects the lockstep failure and the micro-checker detects the mismatch.
11. A method comprising: checking whether a first value from a first structure in a first core matches a second value from a second structure in a second core; detecting a lockstep failure between the first core and the second core; and resynchronizing the first core and the second core if a mismatch between the first value and the second value is detected.
12. The method of claim 11, further comprising indicating that an uncorrectable error has been detected if the first value matches the second value.
13. The method of claim 12, further comprising: generating the first value based on a third value from a third structure in the first core and a fourth value from a fourth structure in the first core; and generating the second value based on a fifth value from a fifth structure in the second core and a sixth value from a sixth structure in the second core.
14. The method of claim 13, wherein: generating
the first value includes generating a checksum based on the third value and the fourth value; and generating the second value includes generating a checksum based on the fifth value and the sixth value.
15. The method of claim 11, further comprising: predicting whether a first instruction is to be executed by the first core based on the first value; and predicting whether a second instruction is to be executed by the second core based on the second value.
16. The method of claim 11, further comprising regenerating the first value and the second value if the mismatch is detected.
17. The method of claim 16, further comprising: comparing the first value to the regenerated first value; comparing the second value to the regenerated second value; synchronizing the first core with the second core if the second value matches the regenerated second value; and synchronizing the second core with the first core if the first value matches the regenerated first value.
18. The method of claim 16, wherein the first structure is a first cache, the first value is a first cache entry, the second structure is a second cache, and the second value is a second cache entry, and wherein regenerating the first value and the second value comprises reloading the first cache entry and the second cache entry.
19. A system comprising: a dynamic random access memory; a first core including a first structure; a second core including a second structure; a micro-checker to detect whether a first value from the first structure matches a second value from the second structure; a global checker to detect a lockstep failure between the first core and the second core; and fault logic to cause the first core and the second core to be resynchronized if the global checker detects the lockstep failure and the micro-checker detects a mismatch between the first value and the second value. |
Reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system

Technical Field

The present invention relates to the field of data processing, and more particularly to the field of mitigating errors in data processing devices.

Background

As improvements in integrated circuit manufacturing technology continue to make microprocessors and other data processing devices smaller in size and lower in operating voltage, manufacturers and users of these devices are becoming more concerned with soft errors. Soft errors occur when alpha particles and high-energy neutrons collide with an integrated circuit and change the charge stored on circuit nodes. If the charge change is large enough, the voltage on a node may change from a level representing one logic state to a level representing a different logic state, in which case the information stored on the node is corrupted. In general, the soft error rate increases as circuit size decreases, because the likelihood of a colliding particle striking a voltage node increases as circuit density increases. Also, as the operating voltage decreases, the difference between the voltage levels representing different logic states decreases, so changing the logic state on a circuit node requires less energy, which produces more soft errors.

Blocking the particles that cause soft errors is extremely difficult, so data processing devices often include techniques for detecting, and sometimes for correcting, soft errors. These error mitigation techniques include dual-modular redundancy ("DMR") and triple-modular redundancy ("TMR"). With DMR, two identical processors or processor cores execute the same program in lockstep and their results are compared. With TMR, three identical processors are run in lockstep.

Using DMR or TMR, an error in any one processor can be detected because the error will cause a difference in results. TMR provides the additional advantage that the matching results of two of the three processors can be assumed to be correct, so recovery from the error is achievable.

Recovery in a DMR system is also feasible by verifying all results before they are committed to registers or otherwise allowed to affect the architectural state of the system. Then, if an error is detected, recovery can be achieved by re-executing all instructions since the last checkpoint. However, this approach may not be practical due to latency or other design constraints. Another approach is to add a fallback mechanism that allows the original architectural state to be restored if an error is detected. This approach may also be impractical due to design complexity, and re-executing from a previous state can be problematic because of non-deterministic events; for example, asynchronous interrupts or the re-execution of output operations that are not idempotent may produce results that differ from the original.
In addition, because DMR and TMR implementations require additional circuitry that can itself be affected by soft errors, and because they can detect errors that would otherwise remain undetected without causing system failure, DMR and TMR may actually increase the observed error rate. For example, an error in a structure used to make predictions may result in an incorrect prediction, where the structure is used to predict which branch of a program should be speculatively executed; but when the branch condition is finally evaluated, the processor will recover automatically.

Drawings

The invention is illustrated by way of example, and not limitation, in the accompanying drawings.

Figure 1 illustrates an embodiment of the present invention in a multi-core processor;
Figure 2 illustrates an embodiment of the present invention that uses micro-checker fingerprint logic to reduce cross-core bandwidth;
Figure 3 illustrates an embodiment of the present invention in a method for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system;
Figure 4 illustrates another embodiment of the present invention in a method for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system;
Figure 5 illustrates another embodiment of the present invention in a method for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system;
Figure 6 illustrates an embodiment of the present invention in a lockstepped dual-modular redundancy system.

Detailed Description

Embodiments of an apparatus and method for reducing the uncorrectable error rate in a lockstepped dual-modular redundancy system are described below. In the following description, numerous specific details are set forth, such as component and system structures, in order to provide a more complete understanding of the invention. However, it will be understood by those skilled in the art that the invention may be practiced without these specific details. In addition, some well-known structures, circuits, techniques, and the like are not described in detail, to avoid unnecessarily obscuring the present invention.

DMR can be used to provide error detection and correction. However, by detecting errors that would not cause a system failure, DMR may also increase the error rate. Embodiments of the present invention can reduce the error rate in a DMR system by using a micro-checker to detect such "false" errors so that they can be ignored. Other embodiments may reduce the error rate in a DMR system by using a micro-checker for a particular structure, such as a cache, for which a value can be regenerated and compared to the original value to determine which of the two processors should be synchronized to the state of the other, thereby avoiding the cost of a full fallback mechanism. These embodiments of the present invention are desirable because they provide some of the benefits of DMR (e.g., error detection and correction capabilities) while reducing some of its disadvantages (e.g., false errors, the cost of full recoverability).

Moreover, embodiments of the present invention are desirable in that they avoid protecting a particular structure with a parity or error-correcting-code mechanism, which may be expensive and may not be necessary for structures that do not corrupt the architectural state. In accordance with an embodiment of the present invention, connecting these structures to the micro-checker can provide the ability to recover from errors without requiring parity or other means of determining which of the two DMR cores has the error.

FIG. 1 illustrates an embodiment of the present invention in a multi-core processor 100. Typically, a multi-core processor is a single integrated circuit that includes more than one execution core. An execution core includes logic for executing instructions.
In addition to execution cores, a multi-core processor can include, within the scope of the present invention, any combination of dedicated or shared resources. A dedicated resource may be a resource dedicated to a single core, such as a dedicated level-one cache, or a resource dedicated to any subset of the cores. A shared resource may be a resource shared by all of the cores, such as a shared level-two cache or a shared external bus unit supporting an interface between the multi-core processor and another component, or a resource shared by any subset of the cores. The invention may also be implemented in an apparatus other than a multi-core processor, such as a multi-processor system having at least two processors, each of which has at least one core.

Processor 100 includes a core 110 and a core 120. Cores 110 and 120 can be based on the design of any of a variety of different types of processors, such as a processor in the Pentium(R) processor family, the Itanium(R) processor family, or another processor family from Intel Corporation, or another processor from another company. Processor 100 also includes a global checker 130 and a micro-checker 140.

Global checker 130 compares the output from core 110 with the output from core 120, in accordance with any known technique for detecting clock synchronization faults in a DMR system, such as with a comparator circuit. For example, the outputs of cores 110 and 120 can be compared when cores 110 and 120 are synchronously running the same program with the same inputs.

Core 110 includes structure 111, which can be any circuit, logic, functional block, module, unit, or other structure that generates or holds a value that, when cores 110 and 120 operate in clock synchronization, is expected to match the corresponding value generated or held by a corresponding structure 121 included in core 120.

In one embodiment, structures 111 and 121 may be structures that do not change the architectural state of processor 100 or of a system including processor 100. For example, structures 111 and 121 can be prediction structures such as conditional branch predictors, jump predictors, return address predictors, or memory dependence predictors.

In another embodiment, structures 111 and 121 may be structures whose contents are replicated or can otherwise be regenerated elsewhere in a system including processor 100. For example, structures 111 and 121 can be cache structures in which each unmodified cache line or entry is a value that can be regenerated by reloading the cache line or entry from a higher-level cache or other memory in the system.

Micro-checker 140 compares values from structure 111 with the corresponding values from structure 121. In various embodiments, the compared values may differ depending on the nature of structures 111 and 121, and may be, for example, a single bit indicating whether a conditional branch should be taken or a jump should occur, a multi-bit predicted return address, or a multi-bit cache line or entry.
Thus, the nature of micro-checker 140 can vary in different embodiments, and the comparison can be performed according to any well-known technique, such as with dedicated logic or comparator circuitry.

In one embodiment, micro-checker 140 can be configured to retain its comparison result at least until the clock-synchronized program execution has reached a point at which a clock synchronization fault detected by global checker 130 could no longer be attributed to a mismatch between the values compared by micro-checker 140. For example, if the compared values remain static at least until each such fault detection point is reached, then micro-checker 140 can be implemented as combinational logic without any special storage elements. Alternatively, this configuration of micro-checker 140 can be implemented with registers or other storage elements that store the results of micro-checker 140. In other embodiments, the micro-checker need not be configured to retain its comparison results.

Processor 100 also includes fault logic 150. Fault logic 150 may be any hardware, microcode, programmable logic, processor abstraction layer, firmware, software, or other logic that directs how processor 100 responds when global checker 130 detects a clock synchronization fault. When global checker 130 detects a clock synchronization fault, if micro-checker 140 has detected a mismatch between a value from structure 111 and the corresponding value from structure 121, fault logic 150 causes cores 110 and 120 to be resynchronized, as described below. However, if micro-checker 140 has not detected a mismatch between the values from structure 111 and the corresponding values from structure 121, then fault logic 150 indicates that an uncorrectable error was detected, according to any well-known method of indicating a system failure, such as reporting a fault code and halting operation.

Although FIG. 1 shows only structure 111 in core 110 and structure 121 in core 120 providing input to micro-checker 140, any number of structures and micro-checkers may be used within the scope of the present invention. For example, FIG. 2 illustrates an embodiment of the invention using multiple structures per core, a single micro-checker, and fingerprint logic for reducing inter-core bandwidth.

In FIG. 2, processor 200 includes cores 210 and 220, a global checker 230, a micro-checker 240, and fault logic 250. Core 210 includes structures 211, 213, and 215, and core 220 includes structures 221, 223, and 225.

Structure 211 includes fingerprint logic 212 that generates fingerprints based on values from structures 213 and 215, where structures 213 and 215 can be any of the structures described above with respect to structure 111 in FIG. 1. Likewise, structure 221 includes fingerprint logic 222 that generates fingerprints based on values from structures 223 and 225, according to the same method as fingerprint logic 212.

Fingerprint logic 212 and fingerprint logic 222 may be implemented using any known method of combining two or more values into a single value (e.g., using a cyclic redundancy checker to generate a checksum).
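As an illustration of this kind of fingerprinting, the following C sketch folds two structure values into a single fingerprint with a bitwise CRC-32. The function names and the choice of the reflected CRC-32 polynomial 0xEDB88320 are assumptions made here for illustration only; any function that combines several values into fewer bits could serve as fingerprint logic.

#include <stdint.h>

/* Update a CRC-32 (reflected polynomial 0xEDB88320) with one 32-bit value. */
static uint32_t crc32_update(uint32_t crc, uint32_t value)
{
    crc ^= value;
    for (int bit = 0; bit < 32; bit++)
        crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    return crc;
}

/* Combine the values from two structures (e.g., structures 213 and 215)
 * into one fingerprint for the micro-checker to compare. */
uint32_t fingerprint(uint32_t value_a, uint32_t value_b)
{
    uint32_t crc = 0xFFFFFFFFu;
    crc = crc32_update(crc, value_a);
    crc = crc32_update(crc, value_b);
    return ~crc;
}

With such logic in each core, a single micro-checker needs to compare only one fingerprint per core, at the cost of a small chance that two differing input pairs collide on the same fingerprint.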
Fingerprint logic 212 and fingerprint logic 222 may be used so that a single micro-checker 240 can detect mismatches between structures 213 and 223 and between structures 215 and 225, rather than using one micro-checker for structures 213 and 223 and another micro-checker for structures 215 and 225.

Fingerprint logic 212 and fingerprint logic 222 can also be used to reduce inter-core bandwidth. For example, fingerprint logic 212 can be used to combine the values from structures 213 and 215 such that the number of bits in the output of fingerprint logic 212 is less than the total number of bits in the two values. Although in some embodiments it may be desirable for fingerprint logic 212 to output a unique value for each combination of inputs, in other embodiments it may be desirable to accept less than 100% accuracy from micro-checker 240 in exchange for reducing the number of bits connected to each input of micro-checker 240. Less than 100% accuracy from micro-checker 240 is acceptable because a failure of micro-checker 240 to detect a mismatch merely causes a correctable clock synchronization fault to be interpreted as an uncorrectable one; it does not cause a faulty operation to be interpreted as correct clock-synchronized operation, which could corrupt the system.

FIG. 3 illustrates an embodiment of the present invention in a method 300 for reducing the uncorrectable error rate in a clock synchronous dual-mode redundant system including processor 100 of FIG. 1, where structures 111 and 121 are structures that cannot change the architectural state, such as prediction structures.

In block 310, cores 110 and 120 operate in clock synchronization. In block 311, structure 111 produces a first value and structure 121 produces a second value. The first value may or may not match the second value. In block 320, micro-checker 140 compares the values from structures 111 and 121. In block 330, the result of the comparison of block 320 is stored.

In block 331, core 110 executes a first instruction based on the value produced by structure 111, and core 120 executes a second instruction based on the value produced by structure 121. The first and second instructions may be the same instruction or different instructions. The first and second values can serve as the basis for determining which instructions to execute by indicating a conditional branch prediction, a jump prediction, a return address prediction, a memory dependence prediction, or any other prediction or result that cannot change the architectural state.

Method 300 proceeds from block 331 to block 340 either directly or after cores 110 and 120 have executed any number of additional instructions.

In block 340, global checker 130 compares the outputs from cores 110 and 120. If the outputs match, the clock-synchronized operation of cores 110 and 120 continues in block 310, regardless of the result stored in block 330, and no error correction, recovery, or notification technique is invoked. However, if global checker 130 detects a clock synchronization fault in block 340, method 300 proceeds to block 350.

If the result stored in block 330 indicates that the value from structure 111 matched the value from structure 121, then method 300 proceeds from block 350 to block 360.
In block 360, fault logic 150 indicates that an uncorrectable error was detected, such as by reporting a fault code and halting the system.

If the result stored in block 330 indicates a mismatch between the values from structures 111 and 121, then method 300 proceeds from block 350 to block 370. In block 370, fault logic 150 causes resynchronization of cores 110 and 120. The resynchronization can be accomplished by changing the architectural state of core 110 to match the architectural state of core 120, or vice versa. Method 300 then returns to block 310.

FIG. 4 illustrates an embodiment of the present invention in a method 400 for reducing the uncorrectable error rate in a clock synchronous dual-mode redundant system including processor 100 of FIG. 1, where structures 111 and 121 are structures whose contents are replicated or can be regenerated elsewhere in the system, such as caches.

In block 410, cores 110 and 120 operate in clock synchronization. In block 411, core 110 executes an instruction that causes an unmodified cache line to be loaded into structure 111, and core 120 executes an instruction that causes an unmodified cache line to be loaded into structure 121. Method 400 proceeds from block 411 to block 420 either directly or after cores 110 and 120 have executed any number of additional instructions.

In block 420, micro-checker 140 compares the value from structure 111 (e.g., the cache line loaded in block 411) with the value from structure 121 (e.g., the cache line loaded in block 411). In block 430, the result of the comparison of block 420 is stored.

Method 400 proceeds from block 430 to block 440 either directly or after cores 110 and 120 have executed any number of additional instructions.

In block 440, global checker 130 compares the outputs from cores 110 and 120. If the outputs match, the clock-synchronized operation of cores 110 and 120 continues in block 410, regardless of the result stored in block 430, and no error correction, recovery, or notification technique is invoked. However, if global checker 130 detects a clock synchronization fault in block 440, method 400 proceeds to block 450.

If the result stored in block 430 indicates that the value from structure 111 matched the value from structure 121, then method 400 proceeds from block 450 to block 460. In block 460, fault logic 150 indicates that an uncorrectable error was detected, such as by reporting a fault code and halting the system.

If the result stored in block 430 indicates a mismatch between the values from structures 111 and 121, then method 400 proceeds from block 450 to block 470. In blocks 470 through 473, fault logic 150 causes resynchronization of cores 110 and 120.

In block 470, the values from structures 111 and 121 are found elsewhere in the system or are otherwise regenerated, for example by reloading the cache line that was loaded in block 411. The regenerated value (for example, a single value if one copy is obtained from the place in the system from which the value was originally copied) or values (for example, one copy per structure) can be loaded into one or more registers, or into one or more other locations, provided for comparison with the values from structures 111 and 121.
Alternatively, the values from structures 111 and 121 can be moved to registers, or to other locations provided for comparison with the regenerated value or values, which may be regenerated, for example, by re-executing the instructions executed in block 411.

In block 471, the regenerated value or values are compared to the values from structures 111 and 121. If the regenerated value matches the value from structure 111, then in block 472 core 120 is synchronized with core 110, such as by changing the architectural state of core 120 to match the architectural state of core 110. If the regenerated value matches the value from structure 121, then in block 473 core 110 is synchronized with core 120, such as by changing the architectural state of core 110 to match the architectural state of core 120. Method 400 returns from blocks 472 and 473 to block 410.

FIG. 5 illustrates an embodiment of the present invention in a method 500 for reducing the uncorrectable error rate in a clock synchronous dual-mode redundant system including processor 200 of FIG. 2.

In block 510, cores 210 and 220 operate in clock synchronization. In block 511, structure 213 produces a value and structure 223 produces a value; the value from structure 213 may or may not match the value from structure 223. In block 512, structure 215 produces a value and structure 225 produces a value; the value from structure 215 may or may not match the value from structure 225.

In block 513, structure 211 generates a fingerprint value based on the values from structures 213 and 215, and structure 221 generates a fingerprint value based on the values from structures 223 and 225. The fingerprint values can be generated according to any well-known technique for combining values (e.g., using a cyclic redundancy checker to generate a checksum).

In block 520, micro-checker 240 compares the fingerprint values from structures 211 and 221. In block 530, the result of the comparison of block 520 is stored.

In block 540, global checker 230 compares the outputs from cores 210 and 220. If the outputs match, the clock-synchronized operation of cores 210 and 220 continues in block 510, regardless of the result stored in block 530, and no error correction, recovery, or notification technique is invoked. However, if global checker 230 detects a clock synchronization fault in block 540, method 500 proceeds to block 550.

If the result stored in block 530 indicates that the fingerprint value from structure 211 matched the fingerprint value from structure 221, then method 500 proceeds from block 550 to block 560. In block 560, fault logic 250 indicates that an uncorrectable error was detected, such as by reporting a fault code and halting the system.

If the result stored in block 530 indicates a mismatch between the values from structures 211 and 221, then method 500 proceeds from block 550 to block 570. In block 570, fault logic 250 causes resynchronization of cores 210 and 220. The resynchronization can be accomplished by changing the architectural state of core 210 to match the architectural state of core 220, or vice versa. Method 500 then returns to block 510.

The methods illustrated in FIGS. 3, 4, and 5 may, within the scope of the present invention, be performed in a different order, with illustrated steps omitted, with additional steps added, or with a combination of reordered, omitted, and additional steps; a sketch of the decision flow these methods share appears below, followed by examples of such variations.
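The decision flow that methods 300 and 500 share can be summarized in a few lines of code. The following C sketch is illustrative only: the function and enumerator names are assumptions, and the two boolean inputs stand in for the global checker's comparison and the stored micro-checker result.

#include <stdbool.h>

/* Possible outcomes of a global-checker event; names are illustrative. */
enum dmr_action {
    DMR_CONTINUE,      /* outputs matched: keep running in lockstep     */
    DMR_RESYNC,        /* correctable: resynchronize the two cores      */
    DMR_UNCORRECTABLE  /* report a fault code and halt the system       */
};

/* Blocks 340/350 of method 300 and blocks 540/550 of method 500: a global
 * mismatch that coincides with a stored micro-checker mismatch is treated
 * as correctable; a global mismatch without one is uncorrectable. */
enum dmr_action fault_logic(bool global_mismatch, bool micro_mismatch)
{
    if (!global_mismatch)
        return DMR_CONTINUE;      /* blocks 310/510: continue lockstep  */
    if (micro_mismatch)
        return DMR_RESYNC;        /* blocks 370/570: copy one core's
                                     architectural state to the other   */
    return DMR_UNCORRECTABLE;     /* blocks 360/560                     */
}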
As one example of such a variation, if the output of the micro-checker remains static until block 350, 450, or 550, respectively, is performed (checking the micro-checker's comparison result), then block 330, 430, or 530 (storing the micro-checker's comparison result) may be omitted.

Other examples of methods in which block 330 (storing the result of the micro-checker's comparison) may be omitted are embodiments of the invention in which the micro-checker output need not be retained. In one such embodiment, the method may proceed from the micro-checker comparison of block 320 directly to the decision of block 350, which is based on that comparison (or blocks 320 and 350 may be combined). In this embodiment, if the micro-checker detects a mismatch (in block 320 or 350), the processor's existing branch misprediction recovery mechanism can be used to flush the speculative state, thereby synchronizing the cores to a non-speculative state in block 370. If the micro-checker does not detect a mismatch, the method of this embodiment can proceed to block 331 to execute instructions based on the prediction, then proceed to block 340, where the global checker checks for a clock synchronization fault, and then, if a clock synchronization fault is detected, proceed to block 360 to indicate an uncorrectable error.

FIG. 6 illustrates an embodiment of the present invention in a clock synchronous dual-mode redundant system 600. System 600 includes a multi-core processor 610 and system memory 620. Processor 610 can be any of the processors described above with respect to FIGS. 1 and 2. System memory 620 can be any type of memory, such as semiconductor-based static or dynamic random access memory, semiconductor-based flash or read-only memory, or magnetic or optical disk storage. Processor 610 and system memory 620 can be coupled to each other in any arrangement, through any combination of buses or direct or point-to-point connections, or through any other means. System 600 can also include any buses (e.g., a peripheral bus) or components (e.g., input/output devices) not shown in FIG. 6.

In system 600, system memory 620 can be used to store values that can be loaded into structures such as structures 111, 121, 213, 215, 223, and 225 described above. Thus, system memory 620 can be the source of replicated or regenerated values in accordance with an embodiment of a method of the present invention, for example as shown in block 470 of FIG. 4.

Processor 100, processor 200, or any other component or portion of a component designed in accordance with an embodiment of the present invention may be designed in various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of ways. First, as is useful in simulation, the hardware may be represented using a hardware description language or another functional description language. Additionally or alternatively, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level where they may be modeled with data representing the physical placement of various devices. Where conventional semiconductor fabrication techniques are used, the data representing the device placement model may be the data specifying the presence or absence of various features on different mask layers of the masks used to fabricate the integrated circuit.

In any representation of the design, the data may be stored in any form of machine-readable medium.
An optical or electrical wave modulated or otherwise generated to transmit such information, a memory, or a magnetic or optical storage medium, such as a disc, may be the machine-readable medium. Any of these media may "carry" or "indicate" the design, or other information used in an embodiment of the present invention, such as the instructions in an error recovery routine. When an electrical carrier wave indicating or carrying the information is transmitted, to the extent that copying, buffering, or retransmission of the electrical signal is performed, a new copy is made. Thus, the actions of a communication provider or a network provider may constitute the making of copies of an article (e.g., a carrier wave) embodying techniques of the present invention.

Thus, an apparatus and method for reducing the uncorrectable error rate in a clock synchronous dual-mode redundant system have been disclosed. While the invention has been described with reference to the specific embodiments illustrated in the drawings, the description is not intended to be construed in a limiting sense; the specific structures and arrangements shown and described are susceptible to various other modifications, as will be apparent to those skilled in the art. In an area of technology such as this, where growth is rapid and further advancements are not easily foreseen, the arrangement and details of the disclosed embodiments may be readily modified without departing from the principles of the present disclosure or the scope of the appended claims. |
A system and method of managing power may include determining a power state based on a first power management request from a first operating system executing on a mobile platform and a second power management request from a second operating system executing on the mobile platform. The first operating system and one or more components of the mobile platform can define a first virtual machine, and the second operating system and one or more components of the mobile platform can define a second virtual machine. In addition, the power state may be applied to the mobile platform. |
1. A method for power management of a mobile platform, comprising:
determining a power state based on a first power management request from a first operating system running on the mobile platform and a second power management request from a second operating system running on the mobile platform, wherein the first operating system and one or more components of the mobile platform define a first virtual machine, and the second operating system and one or more components of the mobile platform define a second virtual machine;
determining a performance state based on a first performance management request from the first operating system and a second performance management request from the second operating system; and
applying the power state and the performance state to the mobile platform,
wherein the first power management request identifies a first power state and the second power management request identifies a second power state, and determining the power state includes selecting the shallowest power state from the first and second power states.

2. The method of claim 1, further comprising reading the first and second power management requests from a set of registers dedicated to power management.

3. The method of claim 2, wherein the set of registers includes at least one of a status register, a command register, and a control register.

4. The method of claim 2, further comprising mapping the set of registers to a memory space.

5. The method of claim 1, wherein applying the power state and the performance state to the mobile platform includes placing a hardware block of the mobile platform in a low power mode corresponding to the power state.

6. The method of claim 5, wherein the hardware block includes at least one of a wireless circuit block, a storage circuit block, and an imaging circuit block.

7. A device for power management of a mobile platform, comprising virtual machine management logic to:
determine a power state based on a first power management request from a first operating system running on the mobile platform and a second power management request from a second operating system running on the mobile platform, wherein the first operating system and one or more components of the mobile platform define a first virtual machine, and the second operating system and one or more components of the mobile platform define a second virtual machine;
determine a performance state based on a first performance management request from the first operating system and a second performance management request from the second operating system; and
apply the power state and the performance state to the mobile platform,
wherein the first power management request identifies a first power state and the second power management request identifies a second power state, and the virtual machine management logic is to select the shallowest power state from the first and second power states.

8. The device of claim 7, further comprising a set of registers dedicated to power management, wherein the virtual machine management logic is to read the first and second power management requests from the set of registers.

9. The device of claim 8, wherein the set of registers includes at least one of a status register, a command register, and a control register.

10. The device of claim 9, wherein the virtual machine management logic is to map the set of registers to a memory space.
11. The device of claim 7, wherein the virtual machine management logic is to place a hardware block of the mobile platform in a low power mode corresponding to the power state.

12. The device of claim 11, wherein the hardware block includes at least one of a wireless circuit block, a storage circuit block, and an imaging circuit block.

13. An apparatus for power management of a mobile platform, comprising:
means for determining a power state based on a first power management request from a first operating system running on the mobile platform and a second power management request from a second operating system running on the mobile platform, wherein the first operating system and one or more components of the mobile platform define a first virtual machine, and the second operating system and one or more components of the mobile platform define a second virtual machine;
means for determining a performance state based on a first performance management request from the first operating system and a second performance management request from the second operating system; and
means for applying the power state and the performance state to the mobile platform,
wherein the first power management request identifies a first power state and the second power management request identifies a second power state, and the means for determining the power state includes means for selecting the shallowest power state from the first and second power states.

14. The apparatus of claim 13, further comprising means for reading the first and second power management requests from a set of registers dedicated to power management.

15. The apparatus of claim 14, wherein the set of registers includes at least one of a status register, a command register, and a control register.

16. The apparatus of claim 14, further comprising means for mapping the set of registers to a memory space.

17. The apparatus of claim 13, wherein the means for applying the power state and the performance state to the mobile platform includes means for placing a hardware block of the mobile platform in a low power mode corresponding to the power state.

18. The apparatus of claim 17, wherein the hardware block includes at least one of a wireless circuit block, a storage circuit block, and an imaging circuit block.

19. A computer-readable storage medium having stored thereon instructions that, when executed, cause a mobile platform to perform the method of any one of claims 1-6.

20. A system for power management of a mobile platform, comprising:
a mobile platform running a first operating system and a second operating system, wherein the first operating system and one or more components of the mobile platform define a first virtual machine, and the second operating system and one or more components of the mobile platform define a second virtual machine; and
virtual machine management logic to:
determine a power state based on a first power management request from the first operating system and a second power management request from the second operating system,
determine a performance state based on a first performance management request from the first operating system and a second performance management request from the second operating system, and
apply the power state and the performance state to the mobile platform,
wherein the first power management request identifies a first power state and the second power management request identifies a second power state, and the virtual machine management logic is to select the shallowest power state from the first and second power states.
21. The system of claim 20, further comprising a set of registers dedicated to power management, wherein the virtual machine management logic is to read the first and second power management requests from the set of registers.

22. The system of claim 21, wherein the set of registers includes at least one of a status register, a command register, and a control register.

23. The system of claim 22, wherein the virtual machine management logic is to map the set of registers to a memory space.

24. The system of claim 20, wherein the mobile platform includes a hardware block, and the virtual machine management logic is to place the hardware block in a low power mode corresponding to the power state.

25. The system of claim 24, wherein the hardware block includes at least one of a wireless circuit block, a storage circuit block, and an imaging circuit block. |
Method, device and system for power management of a mobile platform

Technical Field

The embodiments generally relate to virtualization technology (VT). More specifically, the embodiments relate to fine-grained power management in virtualized mobile platforms.

Background

As the popularity of mobile phones increases, so too do the complexity and functionality of these devices. For example, a given mobile platform may no longer be limited to a single operating system (OS) and can support multiple usage models, such as web browsing and telecommunications. Although the virtualization of devices with multiple operating systems can make it easier for phone vendors, software providers, and service providers to supply devices in a customized manner, a number of challenges remain. For example, when a virtual machine (VM)-based device is considered as a whole, power management decisions made within a single OS may no longer be relevant or accurate, because there may be conflicting power management requests and decisions from the different operating systems.

Summary of the Invention

In one aspect of the present application, a method for power management of a mobile platform is provided. The method includes determining a power state based on a first power management request from a first operating system running on the mobile platform and a second power management request from a second operating system running on the mobile platform, wherein the first operating system and one or more components of the mobile platform define a first virtual machine, and the second operating system and one or more components of the mobile platform define a second virtual machine; determining a performance state based on a first performance management request from the first operating system and a second performance management request from the second operating system; and applying the power state and the performance state to the mobile platform, wherein the first power management request identifies a first power state and the second power management request identifies a second power state, and determining the power state includes selecting the shallowest power state from the first and second power states.
And the second power state selects the shallowest power state.Another aspect of the present application provides a device for power management of a mobile platform, including virtual machine management logic, which is used for: based on a first power management request from a first operating system running on the mobile platform and The second power management request of the second operating system running on the mobile platform determines the power state, wherein the first operating system and one or more components of the mobile platform define a first virtual machine, and the second One or more components of the operating system and the mobile platform define a second virtual machine; determine based on a first performance management request from the first operating system and a second performance management request from the second operating system Performance state; and applying the power state and the performance state to the mobile platform, wherein the first power management request identifies a first power state and the second power management request identifies a second power state, and The virtual machine management logic selects the shallowest power state from the first and second power states.Another aspect of the present application provides a device for power management of a mobile platform, including: a device for power management based on a first power management request from a first operating system running on a mobile platform and a first power management request from a first operating system running on the mobile platform. The second power management request of the second operating system determines the power state component, wherein one or more components of the first operating system and the mobile platform define a first virtual machine, and the second operating system and the One or more components of the mobile platform define a second virtual machine; a component for determining the performance state based on the first performance management request from the first operating system and the second performance management request from the second operating system And means for applying the power state and the performance state to the mobile platform, wherein the first power management request identifies a first power state and the second power management request identifies a second power state, The means for determining the power state includes means for selecting the shallowest power state from the first and second power states.Another aspect of the present application provides a system for power management of a mobile platform, including: a mobile platform running a first operating system and a second operating system, wherein one of the first operating system and the mobile platform or More components need to define a first virtual machine, and one or more components of the second operating system and the mobile platform need to define a second virtual machine; and virtual machine management logic for: The first power management request from the operating system and the second power management request from the second operating system are used to determine the power state based on the first performance management request from the first operating system and the second power management request from the second operating system. 
a second performance management request from the second operating system, and apply the power state and the performance state to the mobile platform, wherein the first power management request identifies a first power state and the second power management request identifies a second power state, and the virtual machine management logic is to select the shallowest power state from the first and second power states.

Brief Description of the Drawings

The various advantages of the embodiments of the present invention will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:

Fig. 1 is a block diagram of an example of a power management virtualization scheme according to an embodiment;

Fig. 2 is a block diagram of an example of a power management architecture with a virtualized coprocessor according to an embodiment;

Fig. 3 is a block diagram of an example of a mobile platform according to an embodiment; and

Fig. 4 is a flowchart of an example of a method of managing power in a mobile platform according to an embodiment.

Detailed Description

Embodiments may provide a method in which a power state is determined based on a first power management request from a first operating system (OS) running on a mobile platform and a second power management request from a second OS running on the mobile platform. The first OS and one or more components of the mobile platform can define a first virtual machine (VM), and the second OS and one or more components of the mobile platform can define a second VM. The method can also provide for applying the power state to the mobile platform.

Other embodiments may include an apparatus with virtual machine management (VMM) logic to determine a power state based on a first power management request from a first OS to be run on a mobile platform and a second power management request from a second OS to be run on the mobile platform. The first OS and one or more components of the mobile platform may define a first VM, and the second OS and one or more components of the mobile platform may define a second VM. The VMM logic can also apply the power state to the mobile platform.

In addition, embodiments may include a system including a mobile platform running a first OS and a second OS, wherein the first OS and one or more components of the mobile platform define a first VM, and the second OS and one or more components of the mobile platform define a second VM. The system can also include VMM logic to determine a power state based on a first power management request from the first OS and a second power management request from the second OS. The VMM logic can also apply the power state to the mobile platform.

Other embodiments may provide a computer-readable storage medium including a set of instructions that, if executed, cause a mobile platform to determine a power state based on a first power management request from a first OS to be run on the mobile platform and a second power management request from a second OS to be run on the mobile platform. The instructions can also provide for applying the power state to the platform.

Turning now to Figure 1, there is shown a scheme 10 for managing power in a virtualized mobile platform.
In the example shown, multiple operating systems 12 (12a-12c) run on the platform to provide a wide variety of functionality. For example, the host OS 12a might constitute a closed telephony stack configured to support off-platform wireless communication (e.g., W-CDMA (UMTS), CDMA2000 (IS-856/IS-2000), etc.), whereas a guest OS 12b might provide desktop functionality, and so on. Generally, each OS 12 and one or more components of the mobile platform can define a virtual machine.

Additionally, the operating systems 12 may be capable of enhanced dynamic/run-time power management control over peripheral devices and other platform components. Each OS 12 can therefore issue power management decisions/requests 14 (14a-14c), for example via software drivers, to reduce power consumption and extend battery life for the mobile platform. For example, the power management requests 14 may involve device power states, processor power states, platform power states, device performance states, and so on (see, e.g., Advanced Configuration and Power Interface Specification, Rev. 4.0, June 16, 2009).

The illustrated scheme 10 also includes virtual machine management (VMM) logic 16 to evaluate the requests 14 and determine which power management request is honored based on platform-wide considerations (e.g., global relevance and compatibility), where the granted power management request 18 can identify a power state. In the case of a device driver coordinated power state change, the requested device power state change may place an individual system-on-chip (SoC) hardware block (e.g., a power island such as a wireless, storage, or imaging circuit block) in a low power mode during periods of inactivity. Thus, an OS device driver may decide to power a device down to the D3 power state, or to power it back up to the D0 state. The OS power management layer (OSPM) can indicate this request to the VMM logic 16, which can then make a globally relevant decision: accept the request and allow the device to reduce power, or reject the request if some other guest OS is currently using the device.

For example, if the host OS 12a requests device power state Dx and performance state Px, guest OS 12b requests device power state Dx1 and performance state Px1, and guest OS 12c requests device power state Dx2 and performance state Px2, the VMM logic 16 can make the following determination:

At time t, Dx3 = Min(Dx2, Dx1, Dx)    (1)

At time t, Px3 = Min(Px2, Px1, Px)    (2)

where the granted power management request 18 can identify device power state Dx3 and performance state Px3. Thus, the illustrated VMM logic 16 selects the shallowest power state and applies it to the mobile platform in real time.

As already noted, the VMM logic 16 can also facilitate performance state and other power state transitions, such as processor power state transitions and platform power state transitions. For example, scheme 10 might support a platform-wide active idle state (e.g., S0ix), which can be a low latency standby state entered when the platform is idle. In this case, the VMM logic 16 can detect whole-platform idleness based on idle detection in each OS 12, and guide the entire SoC into the low-power idle standby state when appropriate.
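The selection rule of equations (1) and (2) is simple enough to show in code. The following C sketch is illustrative only: it assumes that device power states D0-D3 and performance states P0-Pn are encoded as integers with 0 denoting the shallowest (most active) state, and the structure and function names are invented for the example.

/* Per-OS request: device power state and performance state, encoded as
 * integers where 0 is the shallowest state (D0 = 0 ... D3 = 3, P0 = 0 ...). */
struct pm_request {
    int d_state;
    int p_state;
};

static int min_int(int a, int b) { return a < b ? a : b; }

/* Dx3 = Min(Dx2, Dx1, Dx); Px3 = Min(Px2, Px1, Px): grant the shallowest
 * state requested by any OS, so no virtual machine is starved of a device
 * or of performance that it still needs. */
struct pm_request vmm_grant(struct pm_request host,
                            struct pm_request guest1,
                            struct pm_request guest2)
{
    struct pm_request granted;
    granted.d_state = min_int(host.d_state,
                              min_int(guest1.d_state, guest2.d_state));
    granted.p_state = min_int(host.p_state,
                              min_int(guest1.p_state, guest2.p_state));
    return granted;
}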
Such whole-platform idle detection may be particularly advantageous where one or more of the guest operating systems lacks full platform idle detection logic.

Figure 2 shows an architecture 20 in which the VMM logic 16 (Figure 1) is implemented as a virtualized coprocessor having a power management unit (PMU) 22 that uses a set of memory mapped registers 24 to evaluate power management requests from the OS power management layers (OSPM) 26 (26a, 26b), where the registers 24 are dedicated to power management. The registers 24 can reside in any suitable location in the architecture 20. In particular, the illustrated architecture includes a mixed signal integrated circuit (MSIC) power delivery module 28, which generates a central processing unit (CPU) voltage rail 30 and an external voltage rail 32. The CPU voltage rail 30 can be supplied to one or more cores 38 of a processor 36, and the external voltage rail 32 can be supplied to the processor 36 and to various other hardware blocks on a platform controller hub (PCH) 40. A PMU 34 of the processor 36 can conduct on-die clock and/or power gating of the hardware blocks of the processor 36, and the PMU 22 of the PCH 40 can conduct on-die clock and/or power gating of the hardware blocks of the PCH 40.

The PCH 40 can expose the memory mapped registers 24 to the OSPM 26, which can write power management related data to the registers 24 in real time. In the example shown, the registers 24 include a status register (VT_PM_STS), a command register (VT_PM_CMD), a subsystem control register (VT_PM_SSC), and a subsystem status register (VT_PM_SSS). Thus, the OSPM 26 might write a given power management request (e.g., a request to transition a device from D0 to D3) to the command and control registers, and the PMU 22 can read the contents of the registers 24 to make platform-wide power management determinations in the virtualized environment. In addition, the PMU 22 can report the result of a platform-wide determination by writing to the status registers, and the OSPM 26 can read the contents of the registers 24 to determine the outcome.

Turning now to FIG. 3, a virtualization system 42 is shown. The system 42 may be part of a mobile platform having computing functionality (e.g., personal digital assistant/PDA, laptop), communication functionality (e.g., wireless smart phone), imaging functionality, media playing functionality, or any combination thereof (e.g., mobile Internet device/MID). In the example shown, the system 42 includes a processor 44, a graphics memory controller hub (GMCH) 46, a graphics controller 48, a platform controller hub (PCH) 50, system memory 52, basic input/output system (BIOS) memory 54, a network controller 56, a solid state disk (SSD) 58, and one or more other controllers 60. The processor 44, which may include a core region with one or several processor cores 62, may be able to place its cores 62 in one or more active and/or idle states based on performance and/or power management concerns, as already noted.

The illustrated processor 44 and GMCH 46 are integrated onto a common system-on-chip (SoC). Alternatively, the processor 44 could communicate with the GMCH 46 through an interface such as a front side bus (FSB), a point-to-point interconnect fabric, or any other suitable interface.
The GMCH 46 (sometimes referred to as the north bridge or north complex of a chipset) can communicate with the system memory 52 via a memory bus, where the system memory 52 might include dynamic random access memory (DRAM) modules that could be incorporated into a single inline memory module (SIMM), a dual inline memory module (DIMM), a small outline DIMM (SODIMM), and so on.

The GMCH 46 can also communicate with the graphics controller 48 via a graphics bus such as a PCI Express Graphics (PEG, e.g., Peripheral Components Interconnect/PCI Express x16 Graphics 150W-ATX Specification 1.0, PCI Special Interest Group) bus or an Accelerated Graphics Port (e.g., AGP V3.0 Interface Specification, September 2002) bus. In addition, the processor 44 can communicate with the PCH 50 over a hub bus; in one embodiment, the hub bus is a DMI (Direct Media Interface) bus. The PCH 50 can also be incorporated with the processor 44 and the GMCH 46 onto a common SoC.

The illustrated PCH 50 (sometimes referred to as the south bridge or south complex of a chipset) functions as a host device and communicates with the network controller 56, which could provide off-platform communication functionality for a wide variety of purposes such as cellular telephone (e.g., W-CDMA (UMTS), CDMA2000 (IS-856/IS-2000), etc.), WiFi (e.g., IEEE 802.11, 1999 Edition, LAN/MAN Wireless LANS), Bluetooth (e.g., IEEE 802.15.1-2005, Wireless Personal Area Networks), WiMax (e.g., IEEE 802.16-2004, LAN/MAN Broadband Wireless LANS), Global Positioning System (GPS), spread spectrum (e.g., 900 MHz), and other radio frequency (RF) telephony purposes. The PCH 50 may also include one or more wireless hardware circuit blocks 64 to support such functionality.

The SSD 58 might include one or more NAND chips and might be used to provide high capacity data storage and/or a significant amount of parallelism. There may also be solutions that include a NAND controller implemented as a separate application specific integrated circuit (ASIC) controller connected to the PCH 50 on a standard bus such as a Serial ATA (SATA, e.g., SATA Rev. 3.0 Specification, May 27, 2009, SATA International Organization/SATA-IO) bus or a PCI Express Graphics (PEG) bus. The SSD 58 could thus be configured to communicate with one or more storage circuit blocks 66 of the PCH 50 according to a protocol such as the Open NAND Flash Interface protocol (e.g., ONFI Specification, Rev. 2.2, October 7, 2009) or other suitable protocol. The SSD 58 could also be used as a USB (Universal Serial Bus, e.g., USB Specification 2.0, USB Implementers Forum) flash storage device.

The other controllers 60 could communicate with the PCH 50 to provide support for imaging devices as well as user interface devices (e.g., display, keyboard, mouse, etc.) in order to allow a user to interact with and perceive information from the system 42. In the case of imaging devices, the PCH 50 may include one or more imaging circuit blocks 68 to support such functionality.

The system 42 may run multiple operating systems, where each OS, together with one or more components of the system 42, defines a VM. In addition, each OS can include an OSPM layer that can make power management decisions for devices, cores, and/or the system 42 as a whole, and submit those decisions to the VMM as power management requests.
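To make the register interface described above concrete, the following C sketch models how an OSPM layer might post a request to the memory mapped registers and poll for the PMU's decision. Only the register names (VT_PM_STS, VT_PM_CMD, VT_PM_SSC, VT_PM_SSS) come from the description; the base address, offsets, bit fields, and function names are assumptions made purely for illustration.

#include <stdint.h>

/* Hypothetical memory map for the power management register set; volatile
 * pointers model memory mapped I/O. */
#define VT_PM_BASE  0xFED00000u
#define VT_PM_STS   (*(volatile uint32_t *)(VT_PM_BASE + 0x00))
#define VT_PM_CMD   (*(volatile uint32_t *)(VT_PM_BASE + 0x04))
#define VT_PM_SSC   (*(volatile uint32_t *)(VT_PM_BASE + 0x08))
#define VT_PM_SSS   (*(volatile uint32_t *)(VT_PM_BASE + 0x0C))

#define PM_CMD_VALID  (1u << 31)   /* hypothetical handshake bits */
#define PM_STS_DONE   (1u << 31)
#define PM_STS_GRANT  (1u << 0)

/* OSPM side: request that a device move to a new D-state (e.g., D0 to D3). */
static void ospm_request_d_state(uint32_t device_id, uint32_t d_state)
{
    VT_PM_SSC = device_id;                        /* subsystem control  */
    VT_PM_CMD = PM_CMD_VALID | (d_state & 0x3u);  /* D0..D3 in low bits */
}

/* OSPM side: poll the status register for the PMU's platform-wide
 * decision. Returns nonzero if the request was granted. */
static int ospm_request_granted(void)
{
    while (!(VT_PM_STS & PM_STS_DONE))
        ;                                         /* busy-wait: sketch  */
    return (VT_PM_STS & PM_STS_GRANT) != 0;
}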
As already noted, the VMM logic is able to evaluate the power management requests and take platform-wide concerns into account in order to make the appropriate determinations. For example, the VMM logic can place one or more of the hardware blocks 64, 66, 68 in a low power mode corresponding to the power state of the granted power management request 18 (FIG. 1).

Figure 4 shows a method 70 of managing power in a virtualized mobile platform. The method 70 may be implemented in fixed-functionality hardware using assembly language programming and circuit technology such as application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, in executable software such as a set of logic instructions and/or firmware stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), or flash memory, or in any combination thereof.

Processing block 72 provides for reading a set of memory mapped registers dedicated to power management. As already noted, the registers, which may include one or more status, command, and control registers, can reside on the platform controller hub (PCH) 50 (FIG. 3), on the processor 44 (FIG. 3), or in another appropriate location of the platform, and can contain power management requests from the multiple operating systems running on the platform. At block 74, a power state can be determined based on the currently pending power management requests, and at block 76 the power state can be applied to the platform.

The techniques described herein can therefore provide an architectural framework for implementing power management decisions in next-generation, VT-based mobile platforms. For example, the solution may be able to virtualize the power states of power-relevant components, such as devices, CPUs, chipsets, and accelerator blocks, across multiple operating systems.

Embodiments of the present invention are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include, but are not limited to, processors, controllers, chipset components, programmable logic arrays (PLA), memory chips, network chips, systems-on-chip (SoC), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be thicker to indicate more constituent signal paths, have a number label to indicate a number of constituent signal paths, and/or have arrows at one or more ends to indicate the primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions, and may be implemented with any suitable type of signal scheme, such as digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.

Example sizes, models, values, and ranges may have been given, although embodiments of the present invention are not limited to these. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured.
In addition, well-known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments of the invention. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments of the invention, and also in view of the fact that specifics with respect to the implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented; that is, such specifics should be well within the purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that embodiments of the invention can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.

The term "coupled" is used herein to refer to any type of relationship, direct or indirect, between the components in question, and can apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections. In addition, the terms "first", "second", and so on are used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.

Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments of the present invention can be implemented in a variety of forms. Therefore, while the embodiments of this invention have been described in connection with particular examples thereof, the true scope of the embodiments of the invention should not be so limited, since other modifications will become apparent to the skilled practitioner upon a study of the drawings, the specification, and the appended claims. |
A circuit comprising a clocked device having at least one comparison circuit. The comparison circuit checks at least one signal from a network to obtain a reference voltage in order to determine whether the voltage applied to the core circuitry is acceptable for a mode of operation of the processor. The clocked device is also coupled to a voltage regulator. |
What is claimed is:

1. A method comprising:
coupling a clocked device to a comparison circuit;
coupling the comparison circuit to a first voltage reference;
coupling a voltage regulator to a core circuitry; and
comparing a voltage from the first voltage reference to a voltage applied to the clocked device, wherein if the voltage applied to the core circuitry is below a voltage set by a second reference, the comparison circuit sends a signal to the core circuitry indicating that the voltage applied to the core circuitry is sufficiently low for a power saving operation.

2. The method of claim 1, further comprising:
comparing a voltage from a second voltage reference.

3. The method of claim 1, further comprising:
performing an action by the clocked device.

4. The method of claim 3, wherein the action is one of the clocked device providing a warning that the applied voltage is insufficient, the clocked device indicating that a fault condition exists, the clocked device halting, the clocked device issuing an interrupt, the clocked device toggling an external signal, and the clocked device operating at a higher speed.

5. The method of claim 1, wherein if the voltage applied to the core circuitry is below a minimum voltage set by a first reference, the comparison circuit sends a first signal to the core circuitry indicating that the voltage applied to the core circuitry is insufficient for a faster speed operation.

6. The method of claim 1, wherein if the voltage applied to the core circuitry exceeds a minimum first voltage reference, the comparison circuit sends a second signal to the core circuitry indicating that the voltage applied to the core circuitry is sufficient for a faster speed operation.

7. The method of claim 1, wherein if the voltage applied to the core circuitry exceeds a second reference, the comparison circuit sends a signal to the core circuitry indicating that the voltage applied to the core circuitry is not sufficiently low for a power saving operation.

8. The method of claim 7, further comprising:
performing an action by the clocked device.

9. The method of claim 8, wherein the action is one of the clocked device providing a warning that the applied voltage is insufficient, the clocked device indicating that a fault condition exists, the clocked device halting, the clocked device issuing an interrupt, the clocked device toggling an external signal, and the clocked device operating at a higher speed.

10. A circuit comprising:
a core circuitry coupled to a first comparator which checks at least one first signal from a network, the first signal having at least one of a first voltage reference and a second voltage reference;
the first comparator comparing the first voltage reference with a voltage applied to the core circuitry; and
the core circuitry being coupled to a voltage regulator, wherein if the voltage applied to the core circuitry is below a second reference voltage, a second comparator sends a signal to the core circuitry indicating that the voltage applied to the core circuitry is sufficiently low for a power saving operation.

11. The circuit of claim 10, further comprising:
a second comparator coupled to the core circuitry.

12. The circuit of claim 10, wherein if the voltage applied to the core circuitry is below a minimum voltage set by a first reference, the first comparator sends a first signal to the core circuitry indicating that the voltage applied to the core circuitry is insufficient for a faster speed operation.
The circuit of claim 10, wherein if the voltage applied to the core circuitry exceeds a minimum voltage set by a first reference, the first comparator sends a second signal to the core circuitry indicating that the voltage applied to the core circuitry is sufficient for a faster speed operation. 14. The circuit of claim 10, wherein if the voltage applied to the core circuitry exceeds a second reference voltage, the second comparator sends a signal to the core circuitry indicating that the voltage applied to the core circuitry is not sufficiently low for a power saving operation. |
BACKGROUND OF THE INVENTION 1. Field of the Invention This invention relates to a method by which a multi-frequency device checks the core voltage applied to the device and compares the core voltage to a reference to ensure that the appropriate voltage is being applied for the frequency at which the device is being operated. 2. Background Clocked devices such as processors used in computers may operate at multiple speeds. For example, a processor may have a fast speed and a slow speed. A processor in a power saving mode generally operates at a low speed and a relatively low supply voltage ("Vcc"). A processor in a performance mode generally operates at a fast speed and a relatively high Vcc. If the applied voltage is not at the correct level for the processor to operate at the fast speed, the multi-frequency device generally attempts to load a first piece of software that starts, or "boots up," the computer with the processor running at the slow speed, and once the boot process has completed, the processor tries to run at the higher speed. One disadvantage of such a device is that without the proper voltage being applied to the processor, the processor will not be able to operate reliably at the higher speed. Another disadvantage of these processors is that the user is given no information as to why the processor is not operating. Therefore, it is desirable to have an apparatus that is capable of overcoming the disadvantages associated with conventional devices. BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. FIG. 1 illustrates a computer system in accordance with one embodiment of the invention; FIG. 2 illustrates a circuit in accordance with one embodiment of the invention; FIG. 3 illustrates a circuit in accordance with one embodiment of the invention; and FIG. 4 illustrates a flow diagram in accordance with one embodiment of the invention. DETAILED DESCRIPTION One embodiment of the invention relates to a multi-frequency device configured to operate at multiple frequencies. The multi-frequency device is capable of checking the voltage being applied to its core circuit and comparing this voltage to a reference voltage to ensure the appropriate voltage is being applied to the core circuit relative to the frequency at which the multi-frequency device is being operated. If the applied voltage is not correct for the attempted speed, the multi-frequency device performs at least one of several actions. For example, the multi-frequency device may indicate to the user that the voltage applied to the core circuitry is insufficient for the multi-frequency device to operate at the fast speed. Another embodiment of the invention relates to the processor being configured such that the processor only runs at the speed acceptable for the applied voltage. In yet another embodiment of the invention, the processor may be able to execute code that logs the system error. This prevents data corruption or data loss. In the following description, numerous specific details are set forth to provide a thorough understanding of the invention. However, it will be understood by one of ordinary skill in the art that the invention may be practiced without these specific details. 
In other instances, well known structures and techniques have not been shown in detail to avoid obscuring the invention. Presented below is a computer system that is one example of a clocked device that may be used to implement techniques of the invention. Thereafter, two examples of circuits that may be used are presented. It will be appreciated, however, that the number of ways to configure a circuit that implements techniques of the invention, and the configuration of such a circuit, are limited only by the creativity of one skilled in the art. FIG. 1 illustrates one embodiment of a computer system 10 that implements the principles of the present invention. Computer system 10 comprises a processor 130, a memory 18, and an interconnect 15 such as a bus or a point-to-point link. Processor 130 is coupled to the memory device 18 by interconnect 15. In addition, a number of user input/output devices, such as a keyboard 20 and a display 25, are coupled to a chip set (not shown), which is then connected to processor 130. The chip set (not shown) is typically connected to processor 130 using an interconnect that is different from interconnect 15. Processor 130 represents a central processing unit of any type of architecture (e.g., the Intel architecture, Hewlett Packard architecture, Sun Microsystems architecture, IBM architecture, etc.), or hybrid architecture. In addition, processor 130 could be implemented on one or more chips. Memory 18 represents one or more mechanisms for storing data. Memory 18 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media. Interconnect 15 represents one or more buses (e.g., accelerated graphics port bus, peripheral component interconnect bus, industry standard architecture bus, X-Bus, Video Electronics Standards Association (VESA) related buses, etc.) and bridges (also termed bus controllers). While this embodiment is described in relation to a single processor computer system, the invention could be implemented in a multi-processor computer system. In addition to other devices, a network 30 may be present. Network 30 represents one or more network connections for transmitting data. The invention could also be implemented on multiple computers connected via such a network. FIG. 1 also illustrates that the memory device 18 has stored therein data 35 and program instructions (e.g., software, computer program, etc.) 36. Data 35 represents data stored in one or more of the formats described herein. Program instructions 36 represent the necessary code for performing any and/or all of the techniques described with reference to FIGS. 2-4. It will be recognized by one of ordinary skill in the art that the memory device 18 preferably contains additional software (not shown), which is not necessary to understanding the invention. FIG. 1 additionally illustrates that the processor 130 includes decoder 40. Decoder 40 is used for decoding instructions received by processor 130 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, decoder 40 performs the appropriate operations. FIG. 2 illustrates one circuit used in a multi-frequency device to implement techniques of the invention. Circuit 100 comprises clocked device 135 coupled to network 140 through connection 122 and to comparison circuit 120 that may be internal or external to clocked device 135. 
Comparison circuit 120 is configured to handle digital or analog signals from the external network 140. Coupled to comparison circuit 120 and to clocked device 135 is core circuitry 260. Core circuitry 260 may include a decoder for decoding instructions, memory, and logic gates. Clocked device 135 is also coupled to voltage regulator 110. Voltage regulator 110 controls the voltage ("Vcc") of the power supply signal that is applied to clocked device 135. For example, if the device is to operate at a fast speed such as at 750 megahertz ("MHz"), the voltage that should be applied to clocked device 135 by the voltage regulator 110 is at a higher level, e.g., 1.6 volts ("V"). If the multi-frequency device is to operate at a slow speed such as 600 MHz, the applied voltage should be at a lower level, e.g., 1.35 V. Comparison circuit 120 samples and checks the voltage Vcc being applied to clocked device 135. Comparison circuit 120 then compares the voltage applied to clocked device 135 to the reference obtained from network 140. If the voltage being applied to clocked device 135 is too low or too high for the desired speed, comparison circuit 120 issues a signal to core circuitry 260 indicating the out-of-range Vcc. Core circuitry 260 then performs one or more of multiple actions. These actions, performed by the hardware and/or the software of the device, include the device issuing a warning indicating that the applied voltage is incorrect for operation at the desired speed. Another action involves the device entering a fault condition that stops the device and optionally indicates to the user that the voltage applied to the device is insufficient. The device may also halt its operation. The processor of the device may also issue an interrupt to the operating system of the device. Another action is that the device may toggle an external signal. Yet another action is that the device may try to operate at the higher speed. Yet another action is that the device may run only at the speed acceptable for the applied Vcc. FIG. 3 illustrates circuit 200 that implements techniques of one embodiment of the invention. Circuit 200 includes one or more comparators (240, 250), and core circuitry 260 located in clocked device 135. Voltage regulator 210 is coupled to clocked device 135. Coupled to two comparators (240, 250) are constant reference sources (220, 230) that are obtained from a connection to a network (not shown) that may be external or internal to circuit 200. Each constant reference source is a reference voltage. For example, constant reference source 220 establishes a reference voltage for the device to operate at a fast speed, whereas reference source 230 establishes the voltage reference for the device to operate at a slow speed. Reference sources 220 and 230 (also referred to herein as the first reference source and the second reference source) establish the voltage references for each speed at which the device operates. It will be appreciated that additional constant reference sources may be added if there are more than two speeds for clocked device 135. Circuit 200 operates in the following fashion. Voltage regulator 210 controls the voltage Vcc of the power supply (not shown) that is applied to clocked device 135. Vcc is then applied to core circuitry 260. First comparator 240 samples and checks the voltage being applied to core circuitry 260. 
After determining the voltage applied to core circuitry 260, first comparator 240 checks reference source 220 to determine whether the voltage applied to core circuitry 260 meets the minimum voltage level established by reference source 220 for the device to operate at a fast speed. If the applied voltage is below the voltage level provided by reference source 220, a signal such as HIGH_V_OK goes inactive. This indicates to core circuitry 260 that the voltage applied to core circuitry 260 is not high enough for the device to operate at a fast speed. As a result, clocked device 135 may perform one or more of several actions. These actions, performed by the hardware and/or the software, include the device issuing a warning indicating that the applied voltage is incorrect for operation at the desired speed. Another action involves the device entering a fault condition that stops the device and optionally indicates to the user that the voltage applied to the device is insufficient. The device may also halt its operation. The clocked device may also issue an interrupt to the operating system. Another action is that the device may toggle an external signal. Yet another action is that the device may try to operate at the higher speed. Yet another action is that the device may run only at the speed acceptable for the applied Vcc. In contrast, if the applied voltage equals or is greater than the minimum voltage level established by reference source 220, the HIGH_V_OK signal to core circuitry 260 goes active. This indicates that the applied core voltage has been raised to a sufficient level for the device to operate at a fast speed. It will be appreciated that although an applied voltage that is greater than the minimum voltage causes the HIGH_V_OK signal to core circuitry 260 to go active, the multi-frequency device, in this scenario, does not specifically test for a voltage that is greater than the maximum voltage that is allowed. In order to test for a higher voltage, a third comparator and a third reference source should be added to circuit 200. A system designer may find the addition of a third comparator and a third reference source a desirable option in order to reduce the possibility of damage to the circuit, the amount of power wasted, and the amount of heat generated from the excess voltage. Instead of operating at a fast speed, another speed may be desired, such as a medium speed, a slow speed, or any other desired speed. If the multi-frequency device is to operate at a medium speed (or an alternative speed), the voltage that should be applied to the clocked device will be designated by the system designer. If the applied voltage is lower than the expected voltage, then the operating system may not boot up properly, random failures may occur, data may be lost, or other errors may result. On the other hand, if the applied voltage is greater than that which is expected, power is wasted and unnecessary heat is generated, creating an additional burden on the cooling system of the computer system. In addition to operating at a medium speed, the clocked device 135 may also operate at a slow speed. Generally, operating at a slow speed is considered a power saving mode. In the power saving mode, voltage regulator 210 controls a signal having a voltage Vcc that is applied to clocked device 135. Vcc is then applied to core circuitry 260. Second comparator 250 samples and checks the voltage being applied to core circuitry 260. Second comparator 250 then checks reference source 230. 
Reference source 230 establishes a maximum value below which the voltage applied to core circuitry 260 must be lowered for power saving to occur. If the voltage applied to core circuitry 260 is below the reference voltage established by reference source 230, a signal such as LOW_V_OK goes active and is sent to core circuitry 260. This indicates to the core logic that Vcc is sufficiently low to allow for a power saving operation. In comparison, if the voltage applied to core circuitry 260 exceeds the reference voltage established by reference source 230, the LOW_V_OK signal to core circuitry 260 goes inactive. This indicates to the core logic that the voltage applied to core circuitry 260 has not been lowered to the expected value for a power saving operation to occur. Although an applied voltage that is lower than the maximum voltage causes the LOW_V_OK signal to core circuitry 260 to go active, the multi-frequency device, in this scenario, does not specifically test for a voltage that is less than the minimum voltage that is allowed. In order to test for a lower voltage, a fourth comparator and a fourth reference source should be added to circuit 200. A system designer may find the addition of a fourth comparator and a fourth reference source a desirable option in order to ensure that the operating system properly boots up and data loss is minimized. FIG. 4 illustrates a flow diagram of one embodiment of the invention. At block 300, a clocked device 135 is coupled to a comparison circuit or to one or more comparators. At block 310, the comparison circuit (or comparator) is coupled to a network. The network provides a reference voltage that indicates the proper voltage that should be applied to the core circuitry of a clocked device 135 in order for the clocked device to reliably operate at the desired speed. At block 320, a voltage regulator is coupled to the network to control the voltage of the signal from a power source applied to the core circuitry of the clocked device. At block 330, the voltage reference provided by the reference source from the network is compared to the voltage applied to the clocked device. In the preceding detailed description, the invention is described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. |
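For illustration, the two-comparator arrangement of FIG. 3 can be modeled in software. The sketch below is not code from the patent: the reference values are taken from the example voltages in the description (1.6 V for 750 MHz operation, 1.35 V for 600 MHz operation), the signal names HIGH_V_OK and LOW_V_OK follow the description of FIG. 3, and the function names and speed-selection policy (one of the recited actions, running only at the speed acceptable for the applied Vcc) are hypothetical.

# Illustrative sketch only, not code from the patent.
HIGH_V_REF = 1.6   # reference source 220: minimum Vcc for fast-speed operation
LOW_V_REF = 1.35   # reference source 230: maximum Vcc for power-saving mode

def sample_comparators(vcc):
    """Model comparators 240 and 250 sampling the core voltage vcc."""
    return {
        "HIGH_V_OK": vcc >= HIGH_V_REF,  # comparator 240: fast speed permitted
        "LOW_V_OK": vcc < LOW_V_REF,     # comparator 250: power saving permitted
    }

def core_action(vcc, want_fast):
    """One possible core-circuitry policy when a speed change is requested."""
    signals = sample_comparators(vcc)
    if want_fast and not signals["HIGH_V_OK"]:
        return "warning: Vcc insufficient for fast speed; remaining at slow speed"
    if not want_fast and not signals["LOW_V_OK"]:
        return "warning: Vcc not lowered enough for power-saving operation"
    return "ok: operating at requested speed"

if __name__ == "__main__":
    for vcc in (1.30, 1.45, 1.65):
        print(vcc, sample_comparators(vcc), core_action(vcc, want_fast=True))

As the printed output shows, only the 1.65 V sample activates HIGH_V_OK, matching the behavior described for reference source 220; a third or fourth comparator, as suggested above, would add over-voltage and under-voltage checks to this policy.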
Software for use on a client device that is configured for communications via a communications network instantiates a communications function that effects an advertisement download communication link between the client device and an advertisement distribution server system via the communications network, at selected advertisement download times, an advertisement download function that downloads advertisements from the advertisement distribution server system via the advertisement download communication link, an advertisement storage function for storing downloaded advertisements on a storage medium associated with the client device, a user activity monitor function that monitors user activity, and an advertisement display function that effects display of a plurality of the stored advertisements, wherein at least selected ones of the plurality of stored advertisements are displayed for a duration that is at least partially based on the monitored user activity. |
CLAIMS 1. Software for use on a client device that is configured for communications via a communications network, comprising: a communications function that effects an advertisement download communication link between the client device and an advertisement distribution server system via the communications network, at selected advertisement download times; an advertisement download function that downloads advertisements from the advertisement distribution server system via the advertisement download communication link; an advertisement storage function for storing downloaded advertisements on a storage medium associated with the client device; a user activity monitor function that monitors user activity; and an advertisement display function that effects display of a plurality of the stored advertisements, wherein at least selected ones of the plurality of stored advertisements are displayed for a duration that is at least partially based on the monitored user activity. 2. The software as set forth in Claim 1, wherein the advertisement distribution server system is controlled by a vendor of the software. 3. The software as set forth in Claim 1, wherein the communications network comprises the Internet. 4. The software as set forth in Claim 1, wherein the software is subsidized by revenues attributable to the downloaded advertisements. 5. The software as set forth in Claim 1, wherein the advertisement display function displays the stored advertisements in accordance with ad display parameters prescribed by the advertisement distribution server system, the ad display parameters including at least one of the following parameters: the maximum time that the associated advertisement is to be displayed each time that it is displayed; the maximum cumulative time that the associated advertisement is to be displayed; the maximum number of times per day that the associated advertisement is to be displayed; the start date/time before which the associated advertisement should not be displayed; and the end date/time after which the associated advertisement should not be displayed. 6. The software as set forth in Claim 1, wherein the advertisement display function effects display of the at least selected ones of the plurality of the stored advertisements in accordance with ad display parameters prescribed by the advertisement distribution server system, the ad display parameters including at least two of the following parameters: the maximum time that the associated advertisement is to be displayed each time that it is displayed; the maximum cumulative time that the associated advertisement is to be displayed; the maximum number of times per day that the associated advertisement is to be displayed; the start date/time before which the associated advertisement should not be displayed; and the end date/time after which the associated advertisement should not be displayed. 7. The software as set forth in Claim 1, wherein the advertisements include main screen advertisements and toolbar advertisements. 8. The software as set forth in Claim 7, wherein the advertisement display function effects display of the toolbar advertisements in accordance with ad display parameters prescribed by the advertisement distribution server system, wherein the ad display parameters associated with each of the toolbar advertisements include: the start date/time before which the associated advertisement should not be displayed; and the end date/time after which the associated advertisement should not be displayed. 
advertisement should not be displayed. 9. The software as set forth in Claim 1, wherein the advertisement display function effects display of the at least selected ones of the plurality of the stored advertisements in a linear manner. 10. The software as set forth in Claim 1, wherein the advertisement display function effects display of the at least selected ones of the plurality of the stored advertisements in a random manner. 11. The software as set forth in Claim 1, wherein the advertisement display function effects display of the at least selected ones of the plurality of the stored advertisements in a linear sequence according to the order in which the advertisements are stored on the storage medium. 12. The software as set forth in Claim 1, wherein the advertisement display function effects display of the at least selected ones of the plurality of the stored advertisements in an order prescribed by the advertisement distribution server system. 13. The software as set forth in Claim 1, wherein the advertisement display function effects display of the at least selected ones of the plurality of the stored advertisements in accordance with ad display parameters prescribed by a vendor of the software. 14. The software as set forth in Claim 1, wherein: each of the at least selected ones of the plurality of stored advertisements has an associated face time duration parameter that specifies a face time duration for which that advertisement should be displayed; the advertisement display function effects display of each of the at least selected ones of the plurality of the stored advertisements for the face time duration prescribed by the associated face time duration parameter; and the face time duration comprises a time period during which the user activity monitor function detects at least a prescribed minimum level of user activity. 15. The software as set forth in Claim 14, wherein the user activity comprises any user action that is indicative of user interaction with the software. 16. The software as set forth in Claim 14, wherein the user activity comprises any user action that is indicative of the user viewing a display screen associated with the client device. 17. The software as set forth in Claim 14, wherein the user activity comprises any of the following user actions: movement of a pointer device associated with the client device; and use of an input device associated with the client device. 18. The software as set forth in Claim 14, wherein the user activity comprises any of the following user actions: movement of a mouse associated with the client device; clicking of a mouse button associated with the mouse; and movement of one or more keys of a keyboard associated with the client device. 23. The software as set forth in Claim 1, wherein the advertisement display function effects display of the at least selected ones of the plurality of the stored advertisements in accordance with ad display parameters prescribed by the advertisement distribution server system. 24. The software as set forth in Claim 23, wherein the ad display parameters specify, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, how many times that advertisement is to be displayed for a given time period, and how long that advertisement is to be displayed each time that it is displayed. 25. 
The software as set forth in Claim 23, wherein the ad display parameters specify, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, how many times that advertisement is to be displayed for a given time period. 26. The software as set forth in Claim 23, wherein the ad display parameters specify, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, how long that advertisement is to be displayed each time that it is displayed. 27. The software as set forth in Claim 23, wherein the ad display parameters specify, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, a start date/time before which the associated advertisement should not be displayed, and the end date/time after which the associated advertisement should not be displayed. 28. The software as set forth in Claim 23, wherein the ad display parameters specify, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, the total/cumulative amount of time that advertisement is to be displayed. 29. The software as set forth in Claim 23, wherein the ad display parameters include, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, any one or more of the following parameters: a maximum face time that the associated advertisement is to be displayed each time that it is displayed; a maximum cumulative face time that the associated advertisement is to be displayed; the maximum number of times per day that the associated advertisement is to be displayed; the start date/time before which the associated advertisement should not be displayed; and the end date/time after which the associated advertisement should not be displayed; wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 30. The software as set forth in Claim 23, wherein the ad display parameters include, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, any two or more of the following parameters: a maximum face time that the associated advertisement is to be displayed each time that it is displayed; a maximum cumulative face time that the associated advertisement is to be displayed; the maximum number of times per day that the associated advertisement is to be displayed; the start date/time before which the associated advertisement should not be displayed; and the end date/time after which the associated advertisement should not be displayed; wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 31. The software as set forth in Claim 23, wherein the advertisement download function downloads advertisements identified in at least one playlist generated by at least one playlist server. 32. The software as set forth in Claim 1, wherein the advertisement download function downloads advertisements identified in at least one playlist generated by at least one playlist server. 33. The software as set forth in Claim 1, further comprising a cookie generator function that generates a cookie containing information describing user/client device behavior and/or user demographics, and that transmits the cookie to the at least one playlist server. 34. 
The software as set forth in Claim 33, wherein the at least one playlist is generated by the at least one playlist server based at least partially on the cookie. 35. The software as set forth in Claim 32, wherein the at least one playlist is customized to the user/client device. 36. The software as set forth in Claim 32, wherein the at least one playlist is tailored to the user/client device. 37. The software as set forth in Claim 31, wherein the at least one playlist includes the ad display parameters. 38. The software as set forth in Claim 37, wherein the ad display parameters include, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, any one or more of the following parameters: a maximum face time that the associated advertisement is to be displayed each time that it is displayed; a maximum cumulative face time that the associated advertisement is to be displayed; the maximum number of times per day that the associated advertisement is to be displayed; the start date/time before which the associated advertisement should not be displayed; and the end date/time after which the associated advertisement should not be displayed; wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 39. The software as set forth in Claim 37, wherein the ad display parameters include, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, any two or more of the following parameters: a maximum face time that the associated advertisement is to be displayed each time that it is displayed; a maximum cumulative face time that the associated advertisement is to be displayed; the maximum number of times per day that the associated advertisement is to be displayed; the start date/time before which the associated advertisement should not be displayed; and the end date/time after which the associated advertisement should not be displayed; wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 40. The software as set forth in Claim 31, wherein the at least one playlist is generated by the at least one playlist server based at least partially on user demographics and/or user/client device behavior. 41. The software as set forth in Claim 31, wherein the at least one playlist server is controlled by a vendor of the software. 42. The software as set forth in Claim 1, wherein the software is e-mail software. 43. The software as set forth in Claim 37, wherein the ad display parameters include, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, a maximum face time that the associated advertisement is to be displayed each time that it is displayed, wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 44. The software as set forth in Claim 37, wherein the ad display parameters include, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, a maximum cumulative face time that the associated advertisement is to be displayed, wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 45. 
The software as set forth in Claim 37, wherein the ad display parameters include, for each of prescribed ones of the at least selected ones of the plurality of stored advertisements, a maximum face time that the associated advertisement is to be displayed each time that it is displayed, and a maximum cumulative face time that the associated advertisement is to be displayed, wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 46. The software as set forth in Claim 43, wherein the user activity comprises any user action that is indicative of user interaction with the software. 47. The software as set forth in Claim 43, wherein the user activity comprises any user action that is indicative of the user viewing a display screen associated with the client device. 48. The software as set forth in Claim 43, wherein the user activity comprises any of the following user actions: movement of a pointer device associated with the client device; and use of an input device associated with the client device. 49. The software as set forth in Claim 43, wherein the user activity comprises any of the following user actions: movement of a mouse associated with the client device; clicking of a mouse button associated with the mouse; and movement of one or more keys of a keyboard associated with the client device. 50. The software as set forth in Claim 44, wherein the user activity comprises any user action that is indicative of user interaction with the software. 51. The software as set forth in Claim 44, wherein the user activity comprises any user action that is indicative of the user viewing a display associated with the client device. 52. The software as set forth in Claim 44, wherein the user activity comprises any of the following user actions: movement of a pointer device associated with the client device; and use of an input device associated with the client device. 53. The software as set forth in Claim 44, wherein the user activity comprises any of the following user actions: movement of a mouse associated with the client device; clicking of a mouse button associated with the mouse; and movement of one or more keys of a keyboard associated with the client device. 54. The software as set forth in Claim 45, wherein the user activity comprises any user action that is indicative of user interaction with the software. 55. The software as set forth in Claim 45, wherein the user activity comprises any user action that is indicative of the user viewing a display screen associated with the client device. 56. The software as set forth in Claim 45, wherein the user activity comprises any of the following user actions: movement of a pointer device associated with the client device; and use of an input device associated with the client device. 57. The software as set forth in Claim 45, wherein the user activity comprises any of the following user actions: movement of a mouse associated with the client device; clicking of a mouse button associated with the mouse; and movement of one or more keys of a keyboard associated with the client device. 58. 
Software for use on a client device that is configured for communications via a communications network, comprising: a communications function that effects an advertisement download communication link between the client device and an advertisement distribution server system via the communications network, at selected advertisement download times; an advertisement download function that downloads advertisements from the advertisement distribution server system via the advertisement download communication link; an advertisement storage function for storing downloaded advertisements on a storage medium associated with the client device; an advertisement display function that effects display of a plurality of the stored advertisements; and a user activity monitor function that monitors user activity, and that generates user activity data that is indicative of the amount of face time during which at least prescribed ones of the plurality of stored advertisements are displayed. 59. The software as set forth in Claim 58, wherein the face time comprises a time period during which a prescribed minimum level of user activity is detected by the user activity monitor function. 60. The software as set forth in Claim 58, wherein the advertisement distribution server system is controlled by a vendor of the software. 61. The software as set forth in Claim 58, wherein the communications network comprises the Internet. 62. The software as set forth in Claim 58, wherein the software is subsidized by revenues attributable to the downloaded advertisements. 63. The software as set forth in Claim 59, wherein the user activity comprises any user action that is indicative of user interaction with the software. 64. The software as set forth in Claim 59, wherein the user activity comprises any user action that is indicative of the user viewing a display associated with the client device. 65. The software as set forth in Claim 59, wherein the user activity comprises any of the following user actions: movement of a pointer device associated with the client device; and use of an input device associated with the client device. 66. The software as set forth in Claim 59, wherein the user activity comprises any of the following user actions: movement of a mouse associated with the client device; clicking of a mouse button associated with the mouse; and movement of one or more keys of a keyboard associated with the client device. 67. The software as set forth in Claim 58, wherein the user activity comprises any user action that is indicative of user interaction with the software. 68. The software as set forth in Claim 58, wherein the user activity comprises any user action that is indicative of the user viewing a display screen associated with the client device. 69. The software as set forth in Claim 58, wherein the user activity comprises any of the following user actions: movement of a pointer device associated with the client device; and use of an input device associated with the client device. 70. The software as set forth in Claim 58, wherein the user activity comprises any of the following user actions: movement of a mouse associated with the client device; clicking of a mouse button associated with the mouse; and movement of one or more keys of a keyboard associated with the client device. 71. The software as set forth in Claim 1, wherein the advertisement display function effects display of the plurality of stored advertisements when the client device is offline. 72. 
The software as set forth in Claim 1, wherein the client device is configured for communications with a multiplicity of other client devices via the communications network. 73. The software as set forth in Claim 72, wherein the communications network is the Internet. 74. The software as set forth in Claim 72, wherein the advertisement display function effects display of the plurality of stored advertisements when the client device is offline. 75. The software as set forth in Claim 58, wherein the advertisement display function effects display of the plurality of stored advertisements when the client device is offline. 76. The software as set forth in Claim 58, wherein the client device is configured for communications with a multiplicity of other client devices via the communications network. 77. The software as set forth in Claim 76, wherein the communications network is the Internet. 78. The software as set forth in Claim 76, wherein the advertisement display function effects display of the plurality of stored advertisements when the client device is offline. 79. The software as set forth in Claim 1, further comprising an installer function for installing the software on a computer-readable storage medium. 80. The software as set forth in Claim 1, further comprising an installer function for installing the software on the client device. 81. The software as set forth in Claim 1, further comprising an installer function for installing the software on a computer-readable storage medium associated with the client device. 82. The software as set forth in Claim 58, further comprising an installer function for installing the software on a computer-readable storage medium. 83. The software as set forth in Claim 58, further comprising an installer function for installing the software on the client device. 84. The software as set forth in Claim 58, further comprising an installer function for installing the software on a computer-readable storage medium associated with the client device. |
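For illustration, the display-parameter scheme recited in the claims above can be sketched in software. The sketch below is not code from the application: all names, the one-second polling interval, and the activity test are hypothetical. It shows face time, per claims 14, 29, and 58-59, accruing only while user activity is detected, with a showing ending when either the per-display or the cumulative face-time budget is exhausted, and with the start/end date/time window enforced.

# Illustrative sketch only, not code from the application.
import time
from dataclasses import dataclass

@dataclass
class AdParams:
    max_face_time_per_show: float    # seconds the ad may be shown per display
    max_cumulative_face_time: float  # lifetime face-time budget, in seconds
    start: float                     # epoch seconds before which not to display
    end: float                       # epoch seconds after which not to display

@dataclass
class AdState:
    cumulative_face_time: float = 0.0  # face time accrued so far

def show_ad(params, state, user_active):
    """Display one ad; return the face time accrued during this showing.

    Face time counts only the seconds during which user_active() reports a
    prescribed minimum level of user activity (mouse/keyboard input, etc.).
    """
    now = time.time()
    if not (params.start <= now <= params.end):
        return 0.0  # outside the ad's scheduled date/time window
    shown = 0.0
    while (shown < params.max_face_time_per_show
           and state.cumulative_face_time < params.max_cumulative_face_time):
        time.sleep(1.0)          # poll the activity monitor once per second
        if user_active():        # only "active" seconds count as face time
            shown += 1.0
            state.cumulative_face_time += 1.0
    return shown

if __name__ == "__main__":
    params = AdParams(3.0, 60.0, time.time() - 1, time.time() + 3600)
    state = AdState()
    print(show_ad(params, state, user_active=lambda: True))  # prints 3.0

A displays-per-day counter, as in claims 29 and 30, would be a further field checked the same way before the showing begins.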
METHOD AND SYSTEM FOR DISTRIBUTING ADVERTISEMENTS TO CLIENT DEVICES COPYRIGHT NOTICE A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever. BACKGROUND OF THE INVENTION The present invention relates generally to the field of electronic mail ("email") software and systems. More particularly, the present invention is related to advertiser-supported e-mail software for delivering advertisements to client computers having this advertiser-supported e-mail software installed thereon. This application is based on Provisional Patent Application No. 60/169,622, which was filed on December 8, 1999. This Provisional Patent Application is incorporated herein by reference in its entirety. Electronic mail ("e-mail") has become a ubiquitous form of communication in recent years. In general, e-mail works as follows. E-mail software is installed on a client device, e.g., a personal computer (PC), equipped or configured for communications with a multiplicity of other client devices via a communications network. Access to the communications network can be provided by a communications network service provider, e.g., an Internet Service Provider (ISP) and/or a proprietary network e-mail service provider, with whom the user establishes one or more e-mail accounts, each identified by a unique e-mail address, e.g., president@whitehouse.gov. The e-mail software, e.g., the e-mail client, enables a user of the client device to compose e-mail messages, to send e-mail messages to other client devices via the communications network, and to read e-mail messages received from other client devices via the communications network. A user can send e-mail messages to multiple recipients at a time, which capability is sometimes referred to as using a mailing list or, in extreme cases, bulk mailing. The typical e-mail client supports Post Office Protocol Version 3 (POP3), Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol, Version 4 (IMAP4), and/or Multipurpose Internet Mail Extensions (MIME). Each ISP and each proprietary network e-mail service provider independently operates and controls an e-mail communication system (or, simply, "e-mail system"). These independently-operated e-mail systems are bidirectional store-and-forward communication systems that are interconnected to one another via the Internet. Each e-mail system generally includes a number of e-mail servers that store inbound and outbound e-mail messages and then forward them, route them, or simply make them available to the users/intended recipients. Different e-mail systems are operated and controlled by independent control entities. With the advent of the Internet, the user is not restricted to a single system providing both an incoming e-mail server (or server cluster) and an outgoing e-mail server (cluster), i.e., both the incoming and outgoing e-mail servers under the control of a single entity. Most e-mail clients, other than proprietary e-mail systems such as AOL and JUNO, can be configured to receive e-mail from an incoming e-mail server (cluster) controlled by a first entity and to send e-mail through an outgoing e-mail server (cluster) controlled by a second, totally independent entity. 
It will be appreciated that most casual e-mail users download from and upload to respective servers operated by a single entity. Generally, when a user desires to send e-mail messages, or to check for received messages (which operations can occur automatically according to a prescribed schedule), the e-mail software is activated. Upon being activated, the e-mail software: effects a connection or communications session with the host ISP or e-mail service provider via a prescribed communication link by invoking a prescribed communications mechanism, e.g., a dial-up modem, an ISDN connection, a DSL or ADSL connection, etc.; electronically transmits or transports any e-mail messages desired to be sent to the e-mail server system operated by the host ISP or e-mail service provider, e.g., via an SMTP server; receives any inbound e-mail messages forwarded to the client device by the host ISP or e-mail service provider, e.g., via a POP3 or IMAP4 server; and stores any received e-mail messages in a prescribed memory location within the client device, e.g., at either the default location established by the e-mail client or a user-selected location. Exemplary e-mail software is the commercially available e-mail software marketed by the present assignee, QUALCOMM INCORPORATED, under the registered trademarks EUDORA PRO® and EUDORA LIGHT® (hereinafter sometimes referred to generically as "Eudora"). In general, the EUDORA PRO e-mail software provides the user with a "full feature set," and the EUDORA LIGHT e-mail software provides the user with a "reduced feature set" that is a subset of the "full feature set" provided by the EUDORA PRO e-mail software. The EUDORA PRO e-mail software (the previous version of which is referred to as "EP4" in this document) must be paid for by the user (or by someone else on behalf of the user), and can thus be regarded as "Payware," whereas the EUDORA LIGHT e-mail software is provided free of charge to registered users, and thus, can be regarded as "Freeware." Each of the client devices that has any version of Eudora installed thereon can be regarded as a "Eudora client." Presently, there is a very large installed base of Eudora clients. The present assignee, QUALCOMM INCORPORATED, has recently released a new version of its popular EUDORA e-mail software that is popularly known as EUDORA Adware (hereinafter sometimes referred to simply as "Adware"). This new Adware version of Eudora is contained within, i.e., is an integral part of, a new Eudora software product that contains the previously-referenced Payware and Freeware versions of Eudora. In general, each version of Eudora contained within this Eudora product release constitutes a separate operating mode of a single software product. Advantageously, the Adware version of EUDORA PRO® can be activated or switched between modes either automatically, in accordance with prescribed criteria or conditions, or manually, in accordance with prescribed user actions, e.g., registration, payment, selection, etc. This new Adware version of Eudora and the multi-moded Eudora e-mail software product that contains the same were motivated by a desire on the part of the present assignee to provide users with the "full feature set" afforded by the Payware version of Eudora free of charge to the users, by means of distributing advertisements paid for by advertisers to Eudora clients, thereby effectively shifting the source of payment/revenue from the users to the advertisers. 
Thus, this new Eudora software product can be regarded as "advertiser-supported" or "advertiser-subsidized" or simply "sponsored" software. Most Internet service providers (ISPs) and e-mail service providers charge users a flat monthly subscription fee, although some providers still charge users based on usage, e.g., additional charges for on-line time beyond a prescribed level. However, there exists a population of users who desire to have basic e-mail service, but who do not require or want to pay for Internet access. A few companies have addressed the needs of this market segment by providing free e-mail service to users/subscribers who agree to receive advertisements along with their received e-mail messages. In this way, the advertisers support or sponsor the free e-mail service. Based upon the relevant literature, it appears that the first company to propose and offer such a free e-mail service was FreeMark Communications (a.k.a. "ProductView Interactive"). The FreeMark system and method for providing free e-mail service is disclosed in PCT published patent application International Publication Number WO 96/24213, having a priority date of February 1, 1995, based on U.S. Application Serial Number 08/382,118, naming as inventors Marv Goldschmitt and Robert A. Young. The disclosure of this published PCT patent application is expressly incorporated herein by reference. In short, this free e-mail system was subsidized by advertisers that appended advertisements as attachments, e.g., Graphics Interchange Format (GIF) image file attachments, to e-mail messages transmitted to subscribers. The advertisements were stored on the subscriber's computer for viewing while the subscriber was off-line reading the received e-mail messages. In some of their promotional literature, FreeMark referred to the appended advertisements as "postage stamps". In FreeMark's literature, each message received by the subscriber was depicted as an envelope bearing a postage stamp; the postage stamp was the advertisement. Subsequently, a company by the name of Juno Online Services, L.P. (hereinafter simply "JUNO") introduced a free e-mail service. The JUNO system and method for providing free e-mail service is disclosed in U.S. Patent Number 5,809,242, which issued to Marsh et al. on December 8, 1998, the disclosure of which is also expressly incorporated herein by reference. With the proprietary JUNO e-mail system, a plurality of advertisements are downloaded to subscribers when they connect to the proprietary JUNO e-mail server system to send and/or receive e-mail messages, with the advertisements being stored locally on the subscriber's computer for display when the subscriber is off-line composing or reading e-mail messages, i.e., when the subscriber activates Juno e-mail software previously installed on the subscriber's computer. The locally stored advertisements are displayed under the control of a display scheduler resident on the subscriber's computer, to thereby enable the advertisements to be rotated or changed in a dynamic manner. This results in a continuously changing display of advertisements being presented to the subscriber. Various other aspects and features of the proprietary JUNO e-mail system are disclosed in U.S. Patent Number 5,838,790, which issued to McAuliffe et al. on November 17, 1998, and in U.S. Patent Number 5,848,397, which issued to Marsh et al. on December 8, 1998; the disclosures of both of these patents are also expressly incorporated herein by reference. 
With both the FreeMark and JUNO proprietary free e-mail systems, both the advertisements and the e-mail messages are stored on a single e-mail system (e.g., JUNO stores both on a single, unique server which is assigned (bound) to the user when he/she first signs up for service), and are distributed to subscribers under the direction of a common control entity that is controlling all parts of the e-mail system. While this may be a desirable system architecture for providing free e-mail service, it is not a suitable system architecture for a system whose purpose is to distribute advertiser-supported e-mail software that is e-mail system-independent, i.e., which is not tied to a particular proprietary e-mail service provider but, rather, supports public standards, e.g., POP3, SMTP, IMAP4, etc. Moreover, the free e-mail system architecture is not suitable for the many people who maintain multiple e-mail accounts, e.g., business and personal e-mail accounts. As mentioned previously, the present inventors were motivated by a desire to provide a system and method for distributing advertisements to Eudora clients in order to generate advertising revenues that would allow a fully-featured version of the Eudora e-mail software to be widely distributed free of charge to end-users. Moreover, the present inventors were motivated by a desire to provide e-mail software that is both universal and e-mail system-independent, i.e., it is not tied to any particular proprietary e-mail service or service provider. Accordingly, the present inventors have developed a novel multi-moded Eudora e-mail software product that contains the Payware, Freeware, and Adware versions, and have also devised a novel system and method for distributing advertisements to clients equipped with this new software product. As will become fully apparent hereinafter, the purpose and architecture of this novel system are radically different from those of the proprietary FreeMark and JUNO e-mail systems. In this regard, the multi-moded Eudora e-mail software product, and the novel system and method for distributing advertisements to clients equipped with this new software product, embrace a number of different inventions that will become fully apparent from the following disclosure and the documents referenced therein. SUMMARY OF THE INVENTION Based on the above and foregoing, it can be appreciated that there presently exists a need in the art for a subsidized e-mail client which overcomes the above-described deficiencies. The present invention was motivated by a desire to overcome the drawbacks and shortcomings of the presently available technology, and thereby fulfill this need in the art. In one of its aspects, the present invention encompasses e-mail software which incorporates an automatic advertisement download function for automatically downloading advertisements to be displayed when the e-mail software is activated, for the purpose of subsidizing the full e-mail software product (e.g., to provide a "Freeware" version of the e-mail software product to end-users), wherein the e-mail software is e-mail system-independent. Preferably, the e-mail software is a stand-alone product which is universal, i.e., works in conjunction with virtually any e-mail service provider or e-mail system, including those services which comply with open standards. The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices which have this e-mail software installed thereon. 
According to one aspect, the present invention provides an e-mail client for receiving e-mail messages from and sending e-mail messages to at least one of a plurality of e-mail servers operated by respective e-mail operators, wherein the e-mail client receives at least one ad from an ad server operated by a control entity different from the control entity operating the one or more e-mail systems. According to another aspect, the present invention provides a recording medium storing e-mail client software for instantiating an e-mail client which receives e-mail messages from and sends e-mail messages to at least one of a plurality of e-mail servers operated by their respective e-mail operators, wherein the e-mail client automatically receives ads from an ad server which operates independently of the e-mail servers. According to still another aspect, the present invention encompasses a method of operating an e-mail client, provided by an ad server operator, compatible with a plurality of independently operated e-mail servers, including ones based on open e-mail standards. Preferably, the method includes steps for periodically performing at least one of sending e-mail to and receiving e-mail from selected ones of the e-mail servers, periodically receiving ads from the ad server operator, and displaying the received ads responsive to instructions provided by the ad server operator. According to a still further aspect, the present invention provides an e-mail system including an incoming e-mail server storing incoming e-mail messages addressed to a plurality of users, an outgoing e-mail server for forwarding or routing outgoing e-mail messages generated by the users, an ad server operating independently of the e-mail servers, and a plurality of e-mail clients operated by respective users. Preferably, each of the e-mail clients checks for respective e-mail messages stored on the incoming e-mail server, transmits any outgoing e-mail messages stored on the e-mail client to the outgoing e-mail server, and downloads available ads from the ad server while the e-mail client is online. In one aspect, the present invention provides software for use on a client device that is configured for communications via a communications network, including a communications function that effects an advertisement download communication link between the client device and an advertisement distribution server system via the communications network, at selected advertisement download times, an advertisement download function that downloads advertisements from the advertisement distribution server system via the advertisement download communication link, an advertisement storage function for storing downloaded advertisements on a storage medium associated with the client device, a user activity monitor function that monitors user activity, and an advertisement display function that effects display of a plurality of the stored advertisements, wherein at least selected ones of the plurality of stored advertisements are displayed for a duration that is at least partially based on the monitored user activity. 
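The separation of control recited in the foregoing aspects, in which the mail servers and the ad server are operated by independent entities, can be sketched as follows for illustration. The host names below are placeholders and the client methods are hypothetical, not from the application; a real client would speak POP3/IMAP4 and SMTP to the mail servers and a separate protocol, e.g., HTTP, to the ad server.

# Illustrative sketch only, not code from the application.
INCOMING_MAIL_HOST = "pop.example-isp.net"    # e-mail operator's POP3/IMAP4 server
OUTGOING_MAIL_HOST = "smtp.example-isp.net"   # possibly a different operator's SMTP server
AD_SERVER_HOST = "ads.example-vendor.com"     # software vendor's independent ad server

def online_session(client):
    """One connection cycle: check mail, send queued mail, then fetch ads."""
    client.receive_mail(INCOMING_MAIL_HOST)      # POP3 or IMAP4
    client.send_queued_mail(OUTGOING_MAIL_HOST)  # SMTP
    client.download_ads(AD_SERVER_HOST)          # independent of the mail servers

class StubClient:
    """Placeholder client so the sketch runs end to end."""
    def receive_mail(self, host):
        print("checking mail on", host)
    def send_queued_mail(self, host):
        print("sending queued mail via", host)
    def download_ads(self, host):
        print("downloading available ads from", host)

if __name__ == "__main__":
    online_session(StubClient())

The point of the sketch is that each host may be controlled by a different entity, so the ad download path does not depend on any particular e-mail system.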
In another aspect, the present invention provides software for use on a client device that is configured for communications via a communications network, including a communications function that effects an advertisement download communication link between the client device and an advertisement distribution server system via the communications network at selected advertisement download times, an advertisement download function that downloads advertisements from the advertisement distribution server system via the advertisement download communication link, an advertisement storage function for storing downloaded advertisements on a storage medium associated with the client device, an advertisement display function that effects display of a plurality of the stored advertisements, and a user activity monitor function that monitors user activity and generates user activity data that is indicative of the amount of face time during which at least prescribed ones of the plurality of stored advertisements are displayed. Many other features, aspects, uses, applications, advantages, modifications, variations, and alternative embodiments of the foregoing inventive concepts will become apparent from the technical documentation that follows. This technical documentation constitutes an integral part of this application for all purposes. Moreover, additional inventive concepts that have not been discussed above are disclosed in this technical documentation, and it is intended that this application cover such additional inventive concepts. Furthermore, certain terms that have been used in the foregoing and following descriptions of the present invention are defined as follows:

Advertisement(s): This term is intended to broadly encompass any secondary content that is delivered or distributed to client devices in addition to the primary content, e.g., e-mail messages, which the software product instantiated by the client device is designed to receive, transmit, process, display, and/or utilize. For example, this term is intended to cover, without limitation, paid advertisements, community service messages, public service announcements, system information messages or announcements, cross-promo spots, artwork, and any other graphical, multimedia, audio, video, text, or other secondary digital content. Nevertheless, it will be recognized that the primary purpose of the presently contemplated commercial embodiment of the present invention is to distribute paid advertisements, and thus, in accordance with the preferred embodiment of the present invention, the advertisements will be exclusively, or at least primarily, paid advertisements.

Client Device: This term is intended to broadly encompass any device that has digital data processing and output, e.g., display, capabilities, including, but not limited to, desktop computers, laptop computers, hand-held computers, notebook computers, Personal Digital Assistants (PDAs), palm-top computing devices, intelligent devices, information appliances, video game consoles, information kiosks, wired and wireless Personal Communications Systems (PCS) devices, smart phones, intelligent cellular telephones with built-in web browsers, intelligent remote controllers for cable, satellite, and/or terrestrial broadcast television, and any other device that has the requisite capabilities.

Information: This term is intended to broadly encompass any intelligible form of information which can be presented by a client device, i.e., an information client device, including, without limitation, text, documents, files, graphical objects, data objects, multimedia content, audio/sound files, video files, MPEG files, JPEG files, GIF files, PNG files, HTML documents, applications, formatted documents (e.g., word processor and/or spreadsheet documents or files), MP3 files, animations, photographs, and any other document, file, digital, or multimedia content that can be transmitted over a communications network such as the Internet.

E-mail Messages: This term is intended to broadly encompass the e-mail message and any attachments thereto, including, without limitation, text, documents, files, graphical objects, data objects, multimedia content, audio/sound files, video files, MPEG files, JPEG files, GIF files, PNG files, HTML documents, applications, formatted documents (e.g., word processor and/or spreadsheet documents or files), MP3 files, animations, photographs, and any other document, file, digital, or multimedia content that can be transmitted over a communications network such as the Internet.

Software Provider: This term is intended to broadly encompass the developer (or developers), sellers, distributors, etc., of the multi-mode software product(s) installed on the client device.

Memory: This term is intended to broadly encompass any device capable of storing and/or incorporating computer readable code for instantiating the client device referred to immediately above. Thus, the term encompasses all types of recording medium, e.g., a CD-ROM, a disk drive (hard or soft), magnetic tape, and recording devices, e.g., memory devices including DRAM, SRAM, EEPROM, FRAM, and Flash memory. It should be noted that the term is intended to include any type of device which could be deemed persistent storage. To the extent that an Application Specific Integrated Circuit (ASIC) can be considered to incorporate instructions for instantiating a client device, an ASIC is also considered to be within the scope of the term "memory."

BRIEF DESCRIPTION OF THE DRAWINGS

These and various other features and aspects of the present invention will be readily understood with reference to the following detailed description taken in conjunction with the accompanying drawings, in which like or similar numbers are used throughout, and in which:
Fig. 1 is a high-level diagram of a computer system including a plurality of client devices connected to a plurality of independently-operated server devices via a network, which computer system is suitable for implementing various functions according to the present invention;
Fig. 2 is a high-level diagram of a representative one of the client devices illustrated in Fig. 1;
Figs. 3A and 3B illustrate alternative and non-limiting placements of ads in the main navigation screen of an exemplary e-mail software application according to the present invention;
Fig. 4A depicts state transitions when a version of the software is installed by one of a new user, an old user, and an EP4 user;
Fig. 4B illustrates a dialog box associated with the state flow diagram illustrated in Fig. 4A;
Fig. 5A illustrates an exemplary state flow diagram of a process by which the Ad user becomes a registered Ad user, while Figs. 5B through 5G illustrate several dialog boxes associated with Fig. 5A;
Fig. 6A illustrates an exemplary state flow diagram of a process by which a Free user can become a registered Free user, while Fig. 6B illustrates an additional dialog box associated with Fig. 6A;
Fig. 7A illustrates an exemplary state flow diagram of a process by which all users are reminded to update the software according to the present invention, while Fig. 7B depicts an exemplary dialog box corresponding to an Update Nag;
Fig. 8 illustrates an exemplary state flow diagram of a process by which a Box user can become a Paid user;
Fig. 9 illustrates an exemplary state flow diagram of a process by which the Paid user becomes an Unpaid user;
Fig. 10 illustrates an exemplary Nag Window display timeline for MacOS versions of the Eudora e-mail software according to an exemplary embodiment of the present invention;
Fig. 11 illustrates a Nag Schedule employed by the software according to the present invention;
Fig. 12A is a simulated screen capture of a link history window employed in an exemplary software embodiment of the present invention, while Fig. 12B is a dialog box reminding the user that the e-mail client according to the present invention is off-line;
Fig. 13A illustrates the assumptions used in determining the impact of ad transmission on e-mail program operations, while Fig. 13B is a table listing the bandwidth requirements in terms of subscriber base versus the number of new ads to be downloaded each day;
Fig. 14 is a state flow diagram of an exemplary ad fetch process according to the present invention;
Figs. 15A-15H collectively illustrate an algorithm controlling ad scheduling in an exemplary embodiment according to the present invention;
Figs. 16A and 16B illustrate parameter variations in alternative modes of ad display possible in an exemplary embodiment according to the present invention;
Figs. 17A through 17C illustrate additional dialog boxes which advantageously can be generated by the e-mail client software according to one aspect of the present invention;
Fig. 18A illustrates an exemplary dialog box associated with auditing the operation of the Adware software according to the present invention, while Figs. 18B through 18E list useful parameters for auditing the software's performance;
Fig. 19 is a table summarizing the features of a plurality of web pages that advantageously can be employed in conjunction with an exemplary e-mail system according to one aspect of the present invention;
Fig. 20 is a class diagram illustrating the mapping of XML code to objects and the task flow when another exemplary embodiment according to the present invention is operating in accordance with doPost methodology;
Figs. 21A and 21B collectively constitute a pseudo code listing which can be employed by the server 302 in Fig. 1 in generating a PlayList in accordance with the present invention;
Fig. 22 is another class diagram illustrating handling of requests and writes between a server and at least one of the client computers depicted in Fig. 1; and
Fig. 23 illustrates database accesses in accordance with another aspect of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Illustrative embodiments and exemplary applications will now be described with reference to the accompanying drawings to disclose the advantageous teachings of the present invention. While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those having ordinary skill in the art and access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the present invention would be of significant utility. Referring now to specific drawings, Fig. 1 illustrates an exemplary system configuration 10 which is suitable for carrying out the functions according to representative embodiments of the present invention. Although the representative embodiment will be generally described with respect to an electronic mail (e-mail) system where a number of users can create, send, receive and read e-mail messages, the present invention is not so limited. For example, the present invention is equally applicable to a personal digital assistant (PDA) incorporating specialized software for receiving stock quotations via a wireless network. Thus, the principles of the present invention should not be regarded as limited solely to e-mail systems; the principles of the present invention apply to on-line services where a provider, e.g., a software provider, desires to make its software available to users using a variety of payment options for a core set of software functions. As shown in Fig. 1, the system 10 includes a plurality of client computers 100a, 100b,..., 100n, where n denotes any positive integer. Preferably, each of the client computers, generally denoted 100, can be either a workstation or a personal computer executing a client program according to the present invention. In an exemplary case, the client computers 100a, 100b,..., 100n advantageously can be connected to a plurality of servers 301-304, which servers will be described in greater detail below, via a network 200, e.g., the Internet. Alternatively, the network 200 can be one of a local area network (LAN), a wide area network (WAN), an Intranet, or a wireless network, or some combination thereof. It will be appreciated that Fig. 1 illustrates a non-limiting exemplary system; any number of clients can be connected to any number of servers. Fig. 2 illustrates in further detail the hardware configuration of an exemplary one of the client computers 100a, 100b,..., 100n illustrated in Fig. 1.
In the representative embodiment, the client computer 100a includes a central processing unit 209 for executing computer programs (including the client program according to one exemplary embodiment of the present invention) and managing and controlling the operation of the client computer 100a. A storage device 205, such as a floppy disk drive, is coupled to the central processing unit 209 for, e.g., reading and writing data and computer programs to and from removable storage media such as floppy disks. Storage device 206, coupled to the central processing unit 209, also provides a mechanism for storing computer programs and data. Storage device 206 is preferably a hard disk having a high storage capacity. A dynamic memory device 207, such as a RAM, is also coupled to the central processing unit 209. It will be noted that storage devices 205 and 206, as well as dynamic memory device 207, are non-limiting examples of a memory, which term was defined previously. The client computer 100a includes typical input/output devices, such as, for example, a keyboard 203, a mouse 204, a monitor 208, and a communications device 201. It will be appreciated that the communications device advantageously can be a modem, an Ethernet interface card, etc. Referring again to Fig. 1, each of the client computers 100a, 100b,..., 100n can selectively communicate with any of the servers, e.g., servers 301-304, via the network 200. In the computer system 10 depicted in Fig. 1, each of the servers performs a specialized function. In an exemplary case, server 301 performs a registration function, i.e., accepts registration information from each client computer (as discussed in greater detail below), server 302 provides PlayLists to the client computers 100a, 100b,..., 100n, server 303 provides the advertisements designated in the PlayLists, and server 304 acts as a conventional e-mail server system, i.e., provides both the incoming e-mail server and the outgoing e-mail server. It should be mentioned that only servers 301 and 302 need actually be under the direct control of the software provider, e.g., QUALCOMM INCORPORATED in the preferred embodiment, although server 303 advantageously may be under the control of the software provider as well. It should also be mentioned that the reference to software should not be construed as limited to disk-based software; the term "software" should be broadly interpreted as instructions carried out by a processor, whether these instructions are read from a dynamic memory or stored as firmware in a read-only memory (ROM) or other variants of such a device. According to one aspect of the present invention, the "software" advantageously can be provided as a single binary (per client device) file containing the software, e.g., the Eudora software, which can be employed by all users. This binary file will operate in one of three major modes of operation: Payware, Freeware, and Adware. In the Payware mode of operation, the user must pay the software provider to use the software. Freeware is free for all to use, but has fewer features than either Payware or Adware. Preferably, Payware users will prove their payment by a registration code that the software provider will provide to them at time of payment.
This code will be self-validating, and will contain enough data to identify what version(s) the user is entitled to operate. It should be noted that users of the Payware version of Eudora will be entitled to all versions of Eudora that are produced during the calendar year following their payment. The software preferably polls a predetermined site, e.g., a site maintained by QUALCOMM INCORPORATED, on a periodic basis in order to determine if an update for the software is available; if an update is available, the software advantageously can present the user with a small web page of options for obtaining the software update, as discussed in greater detail below. It will be noted that Adware has all the features of Payware, but does not require payment from the user. What Adware does require is that the user display and view ads, which the user will download from the software provider's site and/or one or more sites designated by the software provider. It will also be noted that the initial state of the software is Adware. In an exemplary preferred embodiment, each client computer downloads ads from the ad server 303 unobtrusively and without drawing significant bandwidth, as discussed in greater detail below. Moreover, the ads advantageously can be displayed in a manner that doesn't significantly detract from the use of the software, e.g., Eudora. Figs. 3A and 3B illustrate advertisements integrated into the main screen of the exemplary Eudora e-mail software. Some of the terminology employed in describing the functions and novel features of exemplary embodiments of the present invention was presented above. Additional terminology which facilitates a full understanding of the present invention in terms of the Eudora software is presented immediately below.

Applications: QUALCOMM INCORPORATED has several versions of the Eudora software, including:
- EP4: Eudora Pro 4.x, either Windows or Macintosh.
- Eudora: The new three-modal version of Eudora, running in any of its modes.
- Payware: Eudora running in full-feature mode, after the user has paid.
- Freeware: Eudora running in reduced-feature mode.
- Adware: Eudora running in full-feature mode with ads.
- Paid App: Any version of Payware to which the user's registration entitles him/her.
- Unpaid App: Any version of Payware newer than that to which the user is registered and entitled.
- Old Eudora: Eudora versions prior to Eudora Pro 4.x.

User States: A user state is the most basic concept to understanding how the various modes of the application are interrelated. The user state determines how the program treats the user. The states are defined as follows:
- EP4 User: A user of EP4 who has not registered via the old (non-Adware) registration process.
- Registered EP4 User: A registered user of EP4.
- New User: A user using Eudora for the first time, but who has not obtained a boxed copy, e.g., bundled with a newly purchased computer system, etc.
- Payware User: A user who has paid for Eudora, entered his/her registration code, and is using a version of Eudora to which he/she is entitled.
- Box User: A user who has been given their RegCode by an installer, either from the box product or from an EP4 updater, and whose registration information is therefore unknown.
- Free User: A user who has chosen to use Freeware but who has not entered a Freeware registration code.
- Adware User: A user who is using the Adware version that displays ads.
- Registered Freeware User: A Freeware ("Free") user who has entered a Freeware registration code.
- Registered Adware User: An Adware user who has entered an Ad registration code.
- Deadbeat User: A former Adware user who has been shut off due to Eudora's failure to receive ads (or less than a prescribed minimum number of ads).

Windows and Dialogs: Several windows and dialogs are used in the process. A fuller description of these will be given later, but the major ones are briefly described immediately below:
- Intro Dialog: A dialog presented to new users explaining the software options to new users.
- Registration Nag: A window presented to the user every so often to suggest that the user register his/her software.
- Full-Feature Nag: A window presented to Freeware users requesting them to try Eudora Pro again.
- Free Downgrade: A dialog that tells the user the features that will no longer be available to him/her if they switch to Freeware, but allows them to do so if they really wish.
- Code Entry Dialog: A dialog allowing the user to enter their registration code.
- Ad Window: A window or portion of a screen displaying an ad. See Figs. 3A and 3B.
- Link History Window: A window that will display links the user has clicked on, i.e., ads the user has seen.

Web Pages: The software provider advantageously can elect to restrict interactions between the user and the software provider to the Internet to the maximum extent possible. This will allow the software provider the most flexibility in how the software provider deals with actual users. One potential list of the major pages is provided immediately below, although these "pages" advantageously may be groups of pages, or pages customized to match the demographics of a given user, e.g., a customized and/or branded version of Eudora provided by a major retailer, e.g., a private label version of Eudora provided to its users by an ISP.
- Freeware Reg Page: A page that allows the user to register Freeware.
- Payware Reg Page: A page that accepts payment for Eudora Pro and returns a registration code to the user.
- Adware Reg Page: A page that allows users of Adware to submit their registration information to the software provider.
- Lost Code Page: A page that helps users who have lost their registration codes. (May require human intervention.)
- Update Page: A page generated for a user that lists possible upgrades and the latest version for which he/she is registered.
- Archived Versions Page: A page from which users can download all versions of Eudora.
- Profile Page: A web page where users can enter their profile information.

Nag Schedules: A "Nag Schedule" is a bracketed set of numbers. The numbers signify the number of days since the start of a trial period. Users will be nagged on the days indicated. The last number signifies what happens when the other numbers run out; the user will either not be nagged (0), or be nagged every so many days. For example, a schedule of [0,5,2] means the user will be nagged on the first day, the sixth day, and every other day thereafter.

As mentioned above, the "software" advantageously can be provided as a single binary file containing the software, e.g., the Eudora software, which can be installed (if required) and employed by all users. This binary file will operate in one of three major modes of operation: Payware, Freeware, and Adware. The installation and operation of various functions of the software program according to the present invention will now be described in greater detail while referring to several state flow diagrams, which state diagrams illustrate the major user states and the transitions among them. In the state flow diagrams, the following conventions will be observed:
- Raised grey squares are conceptual names for buttons in dialogs.
- A few paths are labeled with menu items. These items can be used to bring up the window in question directly, without waiting for nags.
- In principle, any dialog or nag can be cancelled, leaving the user back in the initial state.
- Web pages cannot change user state or generate more dialogs; hence, all web pages lead back to the user's initial state.
With the conventions noted above, the installation of the Eudora e-mail software will now be described while referring to Fig. 4A, which depicts state transitions when a version of the software is installed by one of a new user, an old user, and an EP4 user. It will be noted that the software provider doesn't give the user the options to pay for the full feature set or to accept the software with a reduced feature set in the intro dialog. While the software provider will explain those options, e.g., via a dialog box similar to that illustrated in Fig. 4B, as well as the fact that the user can obtain these alternative versions of the software feature set by going through the Help menu, the software defaults to the Adware version. The path taken by EP4 users and box purchasers illustrated in Fig. 4A merits some elaboration. The Code Generator referred to in Fig. 4A advantageously is instantiated by the installer module of the binary file, not in the Eudora e-mail program itself. If the user is using the software's 4.x -> 4.3 update function, the software searches for a copy of EP4 and, on finding a copy of the software, the Code Generator permits the user to generate a RegCode file. If the user is running the installer out of the box, the installer permits RegCode generation without looking for a copy of EP4 first. It should be mentioned that the RegCode file so generated is special in that it contains a line saying "Eudora-Needs-Registration: YES." The Eudora e-mail software will notice this line of text, put the user into the unregistered state, and then nag the user to register the software. Once the user registers, the same registration code will be retransmitted to the user, and the Eudora e-mail software will silently accept it (since it will be the same as the current code), and turn off the need-to-register flag in the e-mail software. Fig. 5A illustrates a state flow diagram of the process by which the Adware user becomes a registered Adware user. It will be appreciated that, in the illustrated exemplary case, the registration process necessitates interaction between client computer 100a and a registration server 301, which are connected to one another via network 200. In Fig. 5A, the Adware user indicated in Fig. 4A registers with the software provider through several alternative mechanisms. For example, the Ad user may wish to register, and simply activates the "HELP" pulldown menu, which is available from the tool bar illustrated at the top of Fig. 3A, and selects the Payment & Registration option, as depicted in Fig. 5B. Alternatively, the Adware user may receive a Nag box, i.e., a Nag dialog box, generated by the software at a predetermined time, as discussed more fully below. Finally, the Ad user may receive a registration via e-mail, i.e., a registration code generated by server 301 and transmitted to the client computer 100a by way of e-mail server 304. As shown in Fig. 5B, the Payment & Registration Window provides several selection buttons, which allow the Ad user to register the Adware, pay for the software, list all versions available to the user, customize or modify the ad stream by providing demographic information, enter a received registration code, and downgrade to the reduced feature set offered to Freeware users. See Figs. 5C-5G. It should be mentioned that the user can enter a registration code to become one of a registered Adware user, a registered Freeware user, and a registered Payware user. See Fig. 5F.
It will be appreciated that the software operates in accordance with the same state flow diagram for Registered Adware Users, except that the Registered Adware User is not subjected to the Registration Nag. The software provider advantageously can use a registration scheme with a self-validating registration code, so that databases do not need to be used to validate registrations. The algorithm for verification is intended to satisfy several conflicting constraints, i.e., it needs to be secure, yet easy to implement and not unduly burdensome for the user. The Eudora e-mail software checks its registration code at startup for validity. If the registration code is invalid, the user should be considered unregistered. If the user is a paid mode user, this will involve a switch to Sponsored mode, about which the user should be warned using a dialog box (not shown). This alert will be followed by an opportunity to reenter the code. The necessary inputs to generate the registration code are as follows:

RegName: The name the user wishes to register under. The software provider will imply but not require that this be the user's real name. The only thing this name will be used for is registration. Supplied by the user. When the software provider actually collects this name from the user, the software provider will ask for it in terms of first and last names, called RegFirstName and RegLastName, respectively. RegName is built by concatenating RegFirstName, a single space, and RegLastName. Each of the first and last names is limited to 20 significant characters; beyond that, characters will be ignored.

RegMonth: The date of the registration, expressed as the number of months since Jan 1, 1999, e.g., 8 bits (20 years). All 1's are reserved for "never expires" situations.

Product: A numeric code indicating what product the registration is for. The user will choose the product; the software provider will translate that choice into an 8-bit code.

It will be appreciated that a plurality of RegCode algorithms advantageously can be employed in generating a self-validating registration code. In brief, the software provider takes the inputs listed above, checksums them, mixes the inputs (including the RegName) and the checksum together according to any one of a variety of algorithms, and encodes the result as a 16-bit number string. It will also be appreciated that the encoding and bit-mixing can be reversed and then, together with the RegName, the checksum can be used to verify the validity of the registration code.
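By way of illustration only, the following Python sketch shows one possible realization of such a self-validating code; the checksum choice (the low 16 bits of an MD5 digest), the mixing scheme, and the function names are assumptions made for exposition and do not represent the actual algorithm employed by the software provider.

import hashlib

def _checksum(data: bytes) -> int:
    # Assumed checksum: low 16 bits of an MD5 digest over the inputs.
    return int.from_bytes(hashlib.md5(data).digest()[:2], "big")

def make_reg_code(reg_name: str, reg_month: int, product: int) -> str:
    # RegName is first name, a space, and last name (20 chars each).
    payload = f"{reg_name[:41]}|{reg_month & 0xFF}|{product & 0xFF}".encode()
    mixed = ((reg_month & 0xFF) << 24) | ((product & 0xFF) << 16) | _checksum(payload)
    return f"{mixed:010d}"  # encode the mixed result as a digit string

def verify_reg_code(reg_name: str, code: str) -> bool:
    # Reverse the encoding/mixing, then recompute the checksum from RegName.
    mixed = int(code)
    reg_month = (mixed >> 24) & 0xFF
    product = (mixed >> 16) & 0xFF
    payload = f"{reg_name[:41]}|{reg_month}|{product}".encode()
    return (mixed & 0xFFFF) == _checksum(payload)

Because the code is a pure function of RegName, RegMonth, and Product, a client can verify it at startup without consulting any registration database, which is precisely the property described above.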
It should be noted that the software provider will store registration codes separately for Freeware (Eudora Light), Adware (Sponsored), and Payware (Eudora Pro) software modes. Acceptance of a registration code for one mode of operation does not imply that the registration codes for the other modes should be destroyed. Once the registration code has been generated, the user must somehow enter the valid RegCode into the Eudora e-mail client. This can be accomplished in one of three ways:
- Manually. Users can type or paste values into the Enter Code dialog box. See Fig. 5F.
- Windows Registry. At Eudora startup, the software will look for the RegCode in the Windows registry (e.g., Software\Qualcomm\Eudora\Check, FName, LName, RCode). The values should be copied into the preferences register or associated lookup table of the e-mail client, if these preferences are found and valid.
- RegCode File. At Eudora startup, the software will look for a file in the application software folder named "RegCode.dat," in an exemplary case. The values should be copied into the preferences register or associated lookup table of the e-mail client, if these preferences are found and valid.

It should also be mentioned that the software provider will allow a special-case MIME part to be mailed to the Eudora e-mail client. The user receiving this part will automatically be asked to verify and enter the information. He/she can also execute the attachment again later. However, he/she cannot forward the attachment to anyone else using the Eudora e-mail client, because a special Content-Type attribute ("regCode") is required to activate the part, and the Eudora e-mail client can't send those. The format of the MIME part (and the RegCode file) is that of a text file containing RFC822-header-style fields. It has a registered MIME type of application/vnd.eudora.data. The fields included in the part are:

Eudora-File-Type: This is always the first field, and describes what sort of information the rest of the file contains. Its value will be either "regFile" or "Profile."

Eudora-First-Name: The first (given) name of the registrant, in US-ASCII.

Eudora-Last-Name: The last (family) name of the registrant, in US-ASCII.

Eudora-Reg-Code: The registration code as produced by the registration system.

Profile: Profile information. This takes the form of a relatively short, e.g., 127 bytes, ASCII string. A profile is generated for each user during the registration process.

Eudora-Needs-Registration: If this field contains "YES", then the user should be nagged to register their copy of Eudora. This is used by installers that generate RegCodes that the software provider otherwise would not have in its database.

Mailed-To: This is the address the information was mailed to. If this field is present and does not match any of the user's personalities or "me" nickname, the information should not be acted upon.
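For concreteness, a RegCode file of the sort just described might read as follows; the names, address, and code value in this example are hypothetical:

Eudora-File-Type: regFile
Eudora-First-Name: Jane
Eudora-Last-Name: Doe
Eudora-Reg-Code: 0123456789
Eudora-Needs-Registration: YES
Mailed-To: jdoe@example.com

A client reading this file would copy the name and code into its preferences and, because Eudora-Needs-Registration is "YES", begin nagging the user to register.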
<SEP> If<tb> this <SEP> field <SEP> is <SEP> present <SEP> and <SEP> does <SEP> not <SEP> match <SEP> any <SEP> of <SEP> the<tb> user's <SEP> personalities <SEP> or"me"nickname, <SEP> the<tb> information <SEP> should <SEP> not <SEP> be <SEP> acted <SEP> I<tb> It should be noted that the Eudora-File-Type field must be present. The other fields listed above may or may not be present. It will be appreciated from the discussion above that RegCodes mailed to the user should be validated prior to use. In order to be used, a RegCode should meet the following tests: (Validity-An invalid RegCode should be ignored. (Directness-The mailed-to field of the RegCode should contain an address for one of the user's personalities or be in the user's"me" nickname. (Applicability-A new RegCode should not automatically override an existing valid RegCode. The only exceptions to this policy are that a Payware mode RegCode should override a Freeware or Adware RegCode, and a Payware mode RegCode that is the same as the user's existing Payware mode RegCode can be used to disable the"Eudora Needs-Registration"Nag. Once the RegCode has been determined to meet the above tests, the user should be asked to accept the code. An exemplary acceptance dialog box is illustrated in Fig. 5F. As mentioned above, the registration code is self-validating, since one part is a function of the other. However, there is another sense of"validation" to be considered, i. e., whether or not the registration code is"valid"for use with a particular version of Eudora. This is accomplished by comparing theExpMonth in the registration code with a BuildMonth field the software provider will put into the application (in a place that cannot be overwritten by plug-ins, settings, etc.). If the ExpMonth and the BuildMonth correspond, the registration is deemed valid by the e-mail client. Fig. 6A illustrates a state flow diagram of the process by which a Freeware user can become a Registered Free User. It will be appreciated that the state flow diagrams of Figs. 5A and 6A are similar in many respects. However, the state flow diagram of Fig. 6A allows for an additional Nag dialog box, i. e., the so-called Feature Nag dialog box pictured in Fig. 6B, to remind both the Free User and the Registered Free User of the enhanced features available to Adware and Payware users. With respect to Freeware Users and Registered Freeware Users, it will be appreciated that the Registered FreewareUsers will not receive the Registration Nag dialog box. It will be appreciated that the state flow diagram illustrated in Fig. 6A is very similar to that applicable to the Adware Users (Fig. 5A), with the exception that FreewareUsers are given the option to try the full features rather than enter their demographic information. It should also be mentioned at this point that all users will receive anUpdate Nag dialog box (not shown) at a predetermined interval. Eudora checks the Update Page once per week during an e-mail session. If the Update page has changed, the user is nagged to update the Eudora e-mail software.Even if the page hasn't changed, the user is nagged on a 30-day schedule to check for updates, to ensure that he/she has the latest software version. See the state flow diagram of Fig. 7A. The Update Nag presents the user with versions to which he/she is entitled to upgrade (if any). See Fig. 7B. The Nag itself is anHTML document with links to versions of the Eudora e-mail software for the user to download. Fig. 
Fig. 8 illustrates an exemplary state flow diagram of the process by which a Box user can become a Paid user, i.e., a Payware user. It will be appreciated that the only Nag the software provider presents specifically to the Box users is the Registration Nag. Once a Box user registers, the Box user is converted into a normal Paid user. It should be mentioned, however, that the payment date for the Box user is set to a specific value by the software provider, so that the software provider can control what versions of the software the Box user will receive, e.g., the period of time for which the user will receive updates from the software provider for free going forward. Having introduced the concept of nagging, this would be a convenient point to discuss various features of nagging implemented in the software according to the present invention. Two major issues are (1) how the software provider nags the user, and (2) when the software provider nags the user. Ideally, Nag Windows are modeless windows. The user can close them using close boxes, or dismiss them by taking one of their action items, or simply leave them open and let them drift wherever they will in the window list. Due to implementation constraints, Windows Nag Windows will be slightly different in behavior than MacOS Nag Windows, which are discussed below. The Nag Windows are floating windows; the software provider expects that the user will probably dismiss the Nag Window in fairly short order. It will be appreciated that the Nag Windows will not, however, stop background tasks from executing. It should be mentioned that there is at most one Nag Window of each variety open at a time; old windows of the same variety advantageously will be recycled. That is, if a given Nag Window is still open the next time the user is due to be nagged, that window will be reused and brought back to the top of the window stack. It should also be mentioned that all Nags applicable to the user should be available to the user by selection from the Help menu, so that the user who dismisses one of the Nag Windows inadvertently can deliberately nag him/her-self if he/she wishes, although such manual Nag invocations do not reset the Nag's timer. Preferably, Nag Windows will be opened on top of all other windows, and no automatically opened windows, including, for example, "Tip of the Day" and other dialog boxes and excluding other Nag Windows, will ever be placed above them until the user has manually brought another, non-Nag window above them. Due to the implementation constraints in the Windows version of the Eudora e-mail software, the only windows that can obscure Nags would be other floating windows. It will be appreciated that this is chiefly due to the requirement that Multiple Document Interface (MDI) child windows be maximizable. It should be mentioned that MDI is a standard Windows interface used by many popular Windows applications and utilities, such as the Windows Program Manager and the Windows File Manager; the MDI interface is also part of the Common User Access (CUA) standard set by IBM. Each MDI-compliant application enables you to open child windows for file-specific tasks such as editing text, managing a database, or working with a spreadsheet, to name but a few of the possible tasks. Fig. 10 illustrates a flow chart for Nag Window display in MacOS versions of the Eudora e-mail software according to an exemplary embodiment of the present invention. In Fig. 10, the software presents just the In mailbox, as denoted by the symbol (1), i.e., time (1).
The Eudora e-mail software then determines that it needs to nag the user, and places the Nag atop the mailbox, as denoted by the symbol (2). Some mail arrives in the "Fresh Meat" mailbox. Ordinarily, this would open on top. However, since there is a "new" Nag being displayed by the software, i.e., one the user has not manually sent behind anything, the "Fresh Meat" mailbox instead opens below the Nag, as denoted by symbol (3). The user manually brings Fresh Meat to the front, as denoted by symbol (4). After that, when mail arrives in More Meat, the Nag is no longer new, and More Meat can be opened on top in the normal manner, as denoted by the symbol (5). The placement of Nag Windows in any of the Windows environments is, in general, considerably simpler. Nag Windows simply float outside the MDI box, above other floating windows, until the user closes them. The exception to this rule is the Update Nag, which acts like a MacOS Nag Window, if the user assumes that the entire Macintosh diagram takes place inside an MDI box. Note particularly that this indicates that the Update Nag may be maximized in the Windows environment. Although the basic concept of Nag Schedules was introduced above, a more detailed discussion of Nag Schedules at this point would facilitate the understanding of certain aspects and features of the software according to an exemplary preferred embodiment of the present invention. In the Eudora e-mail software, each schedule is a set of numbers representing (save for the last) the number of days since a given date (the Nag base). The software provider further must keep track of the last time the user was nagged (the last Nag). Note that both the Nag base and last Nag should be tracked separately for each type of Nag; the software provider must not mix values for Registration Nags and Update Nags, for example. The last number of the Nag Schedule is a repeat interval. Once the other Nags are all exhausted, the user is nagged each time this last number of days passes. The best way to understand a Nag Schedule is to view the schedule as a timeline, as illustrated in Fig. 11. This particular timeline is for a Nag Schedule of [0,4,9,12,3]. Note that the Nags which will occur at the 15 and 18 day points are there because of the final number, the repeat interval (of 3 days). Thus, in Fig. 11, the user is due to be nagged if there is a Nag day greater than the last Nag and less than or equal to the current day. If more than one Nag day has passed, the user is still nagged only once. It should be mentioned that once the Nag Window has been opened, the last Nag is reset to the current day. It should also be mentioned that a final Nag interval of 0 indicates that the Nag is not done any more after the defined period has expired. It will be appreciated that the Eudora e-mail software advantageously includes a software subroutine which determines whether any Nags are due at application startup and at the completion of each mail check. With respect to the latter case, the software checks the modification date on the Update Page once per week during a mail check. If the Update Page has been modified during the past week, the software provider will download update information during the mail check, and nag the user to update his/her software, e.g., the Eudora e-mail software. See Fig. 7B. Finally, it will be noted that when a user's state changes so that an open Nag is no longer relevant, that Nag is closed and no longer displayed.
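The nagging decision described above can be sketched in a few lines of Python; the function name and the day-number representation are assumptions, but the schedule semantics (dated entries followed by a final repeat interval, with 0 meaning no further nags) follow the description above.

def nag_due(schedule, nag_base, last_nag, today):
    # schedule: e.g. [0, 4, 9, 12, 3]; all entries but the last are days
    # since nag_base, and the final entry is a repeat interval (0 = stop).
    # nag_base, last_nag, today: day numbers relative to a common epoch.
    *days, repeat = schedule
    nag_days = [nag_base + d for d in days]
    if repeat:
        # Extend the timeline with the repeat interval past the last entry.
        day = nag_days[-1] if nag_days else nag_base
        while day <= today:
            day += repeat
            nag_days.append(day)
    # Due if some Nag day is greater than the last Nag and <= today.
    return any(last_nag < d <= today for d in nag_days)

A caller that opens the Nag Window would then set the last Nag to the current day, so that at most one nag results even when several Nag days have elapsed, exactly as stated above.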
The preceding discussion also touched briefly on various issues with respect to ads; these issues will be developed more fully immediately below. More specifically, the major client issues involving ads are how the software displays the ads, when the software displays the ads, how the software obtains the ads, how the software provider obtains and transmits demographic information, and how the software provider verifies that ads are actually being displayed. Referring again to Fig. 3A, the main window of the Eudora e-mail software shows a squarish ad and three ad buttons in opposite corners of the main window. It should be mentioned that this particular squarish ad is 144 pixels high by 128 pixels wide; the software will accommodate ads as large as 144 pixels by 144 pixels. It will be appreciated that the area of the window usable by the mailboxes has been reduced approximately 38%; however, it will also be appreciated that the content area has been left untouched. Fig. 3B illustrates an alternative main window where a small graphic or placard is employed, e.g., in the lower right corner, to indicate that the main window is sponsored. It will be appreciated that the actual information that the software provider can accept from advertisers will be relatively simple. For standard ads, such as that depicted in the lower left-hand corner of Fig. 3A, the ad will consist of an image file, e.g., a GIF file, a PNG file, a JPEG file, etc., of not more than 15K, and not more than 144 pixels tall by 144 pixels wide. Preferably, this image file will employ the Web Safe Color Palette. This palette, which is sometimes referred to as the Browser-Safe Palette, contains only 216 colors out of a possible 256 colors definable by 8 bits. The remaining 40 colors vary on Macs and PCs. By eliminating the 40 variable colors, this palette is optimized for cross-platform use. Moreover, the image file advantageously will be associated with a single uniform resource name (URN) to which users who click on the ad will be directed. Each advertiser will also specify the desired scheduling information for the ad, as discussed in greater detail below. In order to facilitate the transmission of the ad to the software provider, e.g., QUALCOMM INCORPORATED, the advertiser may wrap the ad in HTML. The software provider advantageously can also employ HTML-wrapped ads, since this will allow the software provider to include ad parameters as META tags in the HTML page, specify the link address, etc. Moreover, the Toolbar icons will be requested in GIF format as well, but will actually be delivered to the client in a composite format and transformed into standard icons. In addition, placards for sponsors of the Freeware version illustrated in Fig. 3B should be no more than 31 pixels tall, and on the order of 88 pixels wide, though the precise width can be varied at runtime. It should be mentioned here that when the user clicks on an ad, the software provider will normally take the user to the software provider's click-through counter and then redirect the user's browser to the link listed with the ad. The click-through counter advantageously can be one of the software provider's servers, e.g., one of servers 302 and 303. It will be appreciated that this will require that the software provider compose a URN which includes a server name, some tracking information, and the ultimate destination URN, and then the server will redirect the user's browser to the destination URN.
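A minimal sketch of that composition step follows; the host name, path, and query parameter names are illustrative assumptions, and only the general shape (counter server plus tracking information plus destination URN) is taken from the description above.

from urllib.parse import quote

def clickthrough_urn(ad_id: str, destination: str) -> str:
    # The counter server logs the hit for the given ad and then redirects
    # the user's browser to `destination`. Destination URNs are limited to
    # 900 characters, leaving room for this annotation within a suspected
    # 1K overall limit on URN size.
    if len(destination) > 900:
        raise ValueError("destination URN exceeds 900 characters")
    return ("http://counter.example.com/click?ad=" + quote(ad_id)
            + "&dest=" + quote(destination, safe=""))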
One complication occurs if the user is offline at the time that the click-through is attempted. When the user is offline, several possible actions by the software are possible. For example, the software could initiate an online session. Alternatively, the software could simply flag the link using the link history facility. See Fig. 12A, which depicts a window/menu that the software maintains, similar to the history lists maintained by most browsers. When the ad is clicked while the software is offline, the software advantageously adds the link to the link history window, and flags this link so that the user knows he/she had wanted, but was unable, to visit that site during a previous e-mail session. Moreover, the software advantageously may be constructed to permit the user's browser to respond to the click-through. It will be appreciated that some browsers have sophisticated features of their own for dealing with offline conditions, and the software provider shouldn't discount the idea that the user might wish to rely on them. Alternatively, the software may permit transmission of the link to the browser for subsequent handling by the browser when it is online, i.e., the software can allow the user to tell the software provider to send the link to the user's browser the next time he/she is online. In summary, the software provider will, in an exemplary and non-limiting case, mandate the following standards for all advertisements submitted by advertisers:
- No larger than 144x144 pixels. Ads smaller than this will be centered in a 144x144 window and surrounded by the standard frame color.
- GIF or JPEG. The software provider advantageously can convert the GIF file to a PhotoShop (PNG) file, but this is transparent. It should be noted that the software provider will not presently accept PNG ads directly, because of the gamma bugs in PhotoShop.
- No larger than 15K. This will reduce the bandwidth required to transmit the ad as well as the goodwill cost of user bandwidth.
- No animation. This is a cornerstone of the "unobtrusive" message to users aspect of exemplary embodiments of the present invention.
- A single URN of not more than 900 characters. There are suspected limits of 1K on URN size. Limiting the customer's URN to 900 characters will allow the software provider to annotate the URN and still stay within the 1K limit.
- A user-friendly title string of not more than 31 characters. This string will be displayed in the link history window, and should be something users will relate to.
- Use the Web Safe Color Palette. This 216-color palette is optimized for users with 256-color systems, as mentioned above.

It should be mentioned that Toolbar buttons, i.e., the buttons in the upper right-hand corner of Fig. 3A, have the same requirements as standard ads, except for the following:
- Both 16x16 and 32x32 sizes are required. These are the sizes the client supports; the software provider needs them both.
- GIF only. The software will not render JPEG images in the toolbar.

With respect to the co-brand spot ad illustrated in the lower right-hand corner of Fig. 3B, the spot has the same requirements as standard ads, except for the following:
- No larger than 95 pixels wide by 31 pixels high.
- GIF only.
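The submission standards enumerated above lend themselves to a simple mechanical check at intake. The following sketch assumes the Pillow imaging library for reading image dimensions and format; the function name and the exact wording of the messages are illustrative.

import os
from PIL import Image  # Pillow, assumed available

def validate_standard_ad(path: str, urn: str, title: str) -> list:
    # Return a list of reasons the ad fails the submission standards.
    problems = []
    img = Image.open(path)
    if img.format not in ("GIF", "JPEG"):
        problems.append("ad must be GIF or JPEG")
    if img.width > 144 or img.height > 144:
        problems.append("ad must be no larger than 144x144 pixels")
    if os.path.getsize(path) > 15 * 1024:
        problems.append("ad file must be no larger than 15K")
    if getattr(img, "is_animated", False):
        problems.append("no animation is allowed")
    if len(urn) > 900:
        problems.append("URN must be no more than 900 characters")
    if len(title) > 31:
        problems.append("title string must be no more than 31 characters")
    return problems

An empty list indicates the ad meets the size, format, URN, and title constraints; the Web Safe Color Palette requirement would need a separate palette comparison and is omitted here.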
Having discussed the format of the ads being displayed by the software, a detailed discussion of the methodology by which the ads are actually obtained for display will now be presented. The general methodology for obtaining ads for display is to connect to a QUALCOMM INCORPORATED site during a mail check, or some other time when the software senses a live network connection, and download ads into a local cache. It will be appreciated that the act of downloading the ad can be the trigger for billing the advertiser, in order to avoid the necessity of collecting billing information from individual clients. In contrast, proprietary systems such as that provided by JUNO upload ad display data to the designated e-mail server whenever the user accesses his/her e-mail account for any reason.

In order to make reasonable decisions about how to download ads, the software provider needs to have some idea of what impact the ad downloads will have on users. In order to assess that impact, the software provider must make assumptions (or gather information) about what a typical Eudora user's habits are, and what the ads will be like in terms of transmission characteristics.
Part of the Adware process is to add instrumentation in the software client so that the software provider can begin to answer these questions intelligently, rather than by guesswork. However, one must start with some basic assumptions. For example, Fig. 13A is a table listing the assumptions used in determining the impact of ad transmission on e-mail program operations; Fig. 13B is a table listing the bandwidth requirements in terms of the subscriber base versus the number of new ads to be downloaded each day to the subscribers. The implications of these calculations are as follows. Given that the goal is an average ad turnover of, for example, three days, the top line in the table illustrated in Fig. 13B would be the one used by the software provider. The worst-case, i.e., maximum bandwidth, scenario would be to turn over, for example, 25 ads a day. These values are highlighted in the table of Fig. 13B.

In order to determine what ads are to be shown for a particular user class, as well as in order to transmit particular ad parameters, the software provider advantageously employs a PlayList. The PlayList is in its essence a list of URNs from which to fetch the actual ads, as well as a set of attribute-value pairs, on a per-ad basis. The exact format of the PlayList is discussed in greater detail shortly. PlayLists will specify the complete set of ads the client should have, along with parameters for displaying those ads, as discussed immediately below. It should be noted that ads may appear in a PlayList but not be scheduled for display for a long time (or even at all). The presence of such ads in the PlayList will cause the client to retrieve the ads for storage on the client for future display. The general requirements for the PlayList are as follows:
1) The request for a PlayList will contain information to help the PlayList server determine what ads a copy of Eudora is required to fetch.
2) The PlayList can also contain parameters for Eudora as a whole, including the ability to modify how often New PlayLists are checked for.
3) PlayLists are allowed to specify whether or not they should replace all older PlayLists or merely be merged with them.
It should be mentioned that the merge function will allow a more web-like advertising model, e.g., a model employing a rotating ad pool, should the software provider choose to employ such a model.

The basic ad fetch process will now be described while referring to Fig. 14, which is a state flow diagram of an exemplary ad fetch process according to the present invention, and Fig. 1. First, the client software running on client computer 100a identifies itself to the PlayList server 302, e.g., ads.eudora.com. The client software, e.g., the Eudora software, provides to the PlayList server 302 basic client information and the ID of the PlayList the client software currently has installed. The ads.eudora.com server responds with an indication that the current PlayList is still valid, uses a Hypertext Transfer Protocol (HTTP) redirect to send the client to a different PlayList server, e.g., another PlayList server 302', or responds directly with the New PlayList from PlayList server 302. See Fig. 14. In the event that the New PlayList is received from PlayList server 302, the client software compares the New PlayList with its current set of ads, and begins fetching ads not resident in the e-mail client's ad cache from one or more ad servers, e.g., the ad server 303 illustrated in Fig. 1, according to URNs included in the PlayList.
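Under the assumption of a simple map-based ad cache and a pluggable fetch function (none of which the specification prescribes), the client side of this exchange might be sketched as:

    import java.util.*;
    import java.util.function.Function;

    // Minimal sketch of the ad fetch process of Fig. 14: compare the New
    // PlayList against the local ad cache, fetch what is missing (for at
    // most one minute per mail check), and drop ads no longer listed.
    public final class AdFetcher {
        /** entries: adId -> source URN taken from the New PlayList. */
        static void applyPlayList(Map<String, String> entries,
                                  Map<String, byte[]> adCache,
                                  Function<String, byte[]> fetchFromAdServer) {
            long deadline = System.currentTimeMillis() + 60_000; // one-minute cap
            for (Map.Entry<String, String> e : entries.entrySet()) {
                if (System.currentTimeMillis() >= deadline) break; // resume on a later mail check
                adCache.computeIfAbsent(e.getKey(),
                        id -> fetchFromAdServer.apply(e.getValue()));
            }
            // Delete cached ads not currently appearing in the PlayList.
            adCache.keySet().retainAll(entries.keySet());
        }
    }

As in the text, an unfilled PlayList is acceptable: the loop simply stops at the deadline and the available ads are used while the remainder download over subsequent mail checks.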
The client software also deletes ads not currently appearing in the PlayList. Advantageously, the client software performs a check for a New PlayList every three days. It should be mentioned that the 3 day interval between PlayList checks is arbitrary and applicable only to the exemplary preferred embodiments of the present invention being discussed. It should also be mentioned that the ads preferably will be fetched as needed to fill the PlayList, possibly over many mail checks. Moreover, the ad fetch process will be limited to one minute per mail check, irrespective of the tasking of either the e-mail client software or the client computer 100a. After one minute, the client software will disconnect from the ad server 303. This will often mean that the e-mail client software has not filled the PlayList when the ad fetch operation is terminated. This is acceptable. The software will utilize the available ads while the remaining ads are being downloaded.

Furthermore, the software provider advantageously can provide for multiple servers on a peer with the ads.eudora.com server 303. It will be appreciated that these servers will provide extra ads for some Eudora user communities, e.g., all of the users at a company serviced by one ISP, etc. Stated another way, an ISP which provides additional services such as local and long distance telephone access may wish to cross promote these services to its own customer base. Thus, the ISP advantageously can contract for such localized promotion. The PlayLists transmitted to the ISP's branded Adware e-mail clients would be linked to an ad server 303" maintained by the ISP in that instance.

Given a set of available ads, the software still needs to choose which ad to display next. It will be appreciated that this is a matter of much excitement in the Web ad industry, where many choices are allegedly made to maximize the profit of the advertiser. In particular, ads that generate better user response are preferred because such ads generate extra revenue; such ads are frequently tied to the content of the Web page upon which they are displayed. However, it is unlikely that either the software provider or the client software will be able to derive a significant benefit from the ad scheduling algorithms currently run on ad services. This is in part due to the fact that the ads being displayed by the e-mail client software are divorced from the content being displayed, i.e., neither the software provider nor the client software is cognizant of the content of any particular ad that the user is looking at, and in part due to the fact that the e-mail client software will be requesting ads in a batch for later display, rather than requesting them in "real time."

As mentioned above, the PlayLists provide certain global inputs to the ad scheduling algorithm, including the parameters listed in the table immediately following.

FaceTimeQuota: The amount of time per day that the e-mail client software is supposed to show the ad.
RerunInterval: The age beyond which ads should not be "rerun" after the "runout," i.e., maximum permissible, time is passed.

In addition, the per-ad inputs in the PlayList associated with ad scheduling are set forth in the following table.
ShowFor: This is the number of seconds the ad should be shown for at any given time. This number might be small, like a TV ad (e.g., 30), or large, more like a billboard (e.g., 3600 for one hour, uninterrupted).
ShowForMax: Maximum total amount of time to show this ad. The ad is exhausted after this time, and should be discarded once new ads arrive.
DayMax: Maximum number of times per day to show this particular ad.
BlackBefore: The amount of time the ad window should be blank before the ad is displayed.
BlackAfter: The amount of time the ad window should be blank after the ad is displayed. BlackAfter runs concurrently with the blackBefore of the next ad, so that the actual time between ads is max(blackAfter, blackBefore), not blackAfter + blackBefore.
StartDT: Date/time (time zone optional) before which the ad should not run.
EndDT: Date/time (time zone optional) after which the ad should not run.

There are some values the software provider computes that are also input to the scheduling algorithm. These global values are listed in the table which follows.

AdFaceTimeToday: The total amount of ad face time for the current day during which regular ads have been shown.
TotalFaceTimeToday: The total amount of face time for the current day.

The software also keeps track of and reports these values to the software provider for every ad:

NumberShownToday: The number of times an ad has been shown on the current day.
ThisShowTime: The amount of face time the current ad has received.
LastShownDate: The last date/time that the e-mail client software showed this ad.
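By way of illustration, a client-side eligibility test over these inputs might be sketched as follows; the AdRecord class is hypothetical, with fields simply mirroring the parameters tabulated above:

    import java.time.Instant;

    // Minimal sketch: decide whether one ad is currently eligible to run,
    // using the per-ad scheduling inputs tabulated above. Field names mirror
    // the PlayList parameters; the class itself is a hypothetical stand-in.
    public final class AdRecord {
        long showForSec;        // ShowFor
        long showForMaxSec;     // ShowForMax (0 = unlimited)
        int dayMax;             // DayMax     (0 = unlimited)
        Instant startDT;        // StartDT    (null = no constraint)
        Instant endDT;          // EndDT      (null = no constraint)
        int numberShownToday;   // NumberShownToday (reported value)
        long thisShowTimeSec;   // ThisShowTime     (reported value)

        boolean eligible(Instant now) {
            if (startDT != null && now.isBefore(startDT)) return false; // not yet live
            if (endDT != null && now.isAfter(endDT)) return false;      // expired
            if (dayMax > 0 && numberShownToday >= dayMax) return false; // daily cap hit
            if (showForMaxSec > 0 && thisShowTimeSec + showForSec > showForMaxSec)
                return false; // showing again would exhaust the ad
            return true;
        }
    }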
Advantageously, the software provider implements three major states of the ad scheduler: the regularState, the runoutState, and the rerunState. In the regularState, the e-mail client software advantageously is showing regular ads and accounting for them. It will be appreciated that this is what actually generates charges for the bulk of the ads displayed on the e-mail client. In contrast, the runoutState is selected when the e-mail client software has shown enough regular ads to fill the assigned faceTimeQuota, and the ad cache includes one or more runout ads available for showing. In the rerunState, the e-mail client software has exhausted both its regular ad quota and the runout ads, i.e., the e-mail client software is now reshowing the regular ads, but the software provider is not charging for them.

It should be mentioned here that the software provider advantageously can provide a custom installer to various ISPs, book publishers, etc., that will label or brand the copies of Eudora that they have distributed. The software provider will then credit these distributors with a percentage of the ad revenue generated by the clients they distribute. It will be appreciated that these credits may be offset by cross promotional activities associated with each branded version of the Adware e-mail client, for the reasons previously discussed.

Given the discussion presented immediately above, a more detailed explanation of various aspects of the exemplary e-mail client software according to the present invention can now be provided. As previously noted, the PlayList is a way to control the fetching and display of ads in software, e.g., in the Eudora e-mail client. The primary benefits associated with the PlayList are the separation of ad parameters from ad images, insulation of the Eudora client from intimate knowledge of ad image servers, and centralized server intelligence in ad distribution, without requiring user registration or centralized user databases. Thus, it will be appreciated that PlayLists are extremely malleable objects. In an exemplary case, the PlayLists can exert varying degrees of control over how the Eudora client behaves, from specifying the exact set of ads Eudora runs to simply transmitting abstract URNs which will choose their own ads. If PlayLists are used to their fullest advantage, they will give the software provider a powerful tool in controlling ad display in software such as Eudora; if PlayLists are later deemed irrelevant, the PlayLists cost the software provider one extra, brief network connection per day.

As discussed above with respect to Figs. 1 and 14, the client computer 100a connects to a PlayList server 302 (which may redirect to a different server 302') via a network 200. Then, the PlayList server 302 returns a PlayList to the client computer 100a via the network 200. Subsequently, the e-mail client software on the client computer fetches the ads specified in the PlayList. The PlayList Request, which is sent by the Eudora client to the PlayList server 302 in order to initiate the ad fetch process, is not a simple burst of binary code. The PlayList Request is a block of extensible markup language (XML) code employed to provide the server 302 with sufficient information to build or select the proper New PlayList for the user.
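Although the specification fixes the information content of the request (tabulated below) rather than an exact schema, a PlayList Request might look something like the following sketch, in which every tag and attribute name is an assumption:

    <?xml version="1.0"?>
    <!-- Illustrative only: tag and attribute names are assumed, not taken
         from the specification. The CheckSum value is an MD5 digest in
         hexadecimal, as described later in this section. -->
    <PlayListRequest CheckSum="9e107d9d372bb6826bd81d3542a419d6">
      <UserAgent>Eudora/4.3 (Windows)</UserAgent>
      <PlayList id="PL-19991102-01">
        <Entry adId="42" active="1"/>
        <Entry adId="17" isRunout="1"/>
      </PlayList>
      <FaceTimeUsedToday>660</FaceTimeUsedToday>
      <FaceTimeLeft>5400</FaceTimeLeft>
      <Screen height="768" width="1024" depth="8"/>
      <DistributorID>ISP-0042</DistributorID>
    </PlayListRequest>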
The information in the PlayList Request is shown in the following table.

UserAgent: This is a string identifying the application requesting the PlayList, its version number, and the platform on which it is running.
PlayList(s): This identifies the PlayList(s) that the client is currently using. This may have multiple values if the client is working off more than one PlayList.
Entry: A list of the id's of the ads recently shown by this client. The entries are nested inside the PlayList to which they belong. Each entry can have zero or more of the following associated attributes or types (the number following the equal sign (=) indicates an exemplary value attached to the attribute, which is used to achieve the description of the entry attributes provided below):
- Active="0": The ad is no longer being shown.
- IsRunout="1": The ad is a runout ad. This saves the server having to do a lookup on the ad.
- IsSponsor="1": The ad is a sponsorship ad, to be shown in place of the QUALCOMM logo. See Fig. 3B.
- IsButton="1": The ad is a toolbar button.
- Deleted="1": The ad has been hidden by the user. This is allowed only for toolbar ads.
FaceTime: This lists the amount of face time the user has used in the last seven calendar days. This allows the server to determine how many ads the client is likely to be able to display. The value for the current day is the greater of today's value (see FaceTimeUsedToday) and last week's value for today.
FaceTimeLeft: This is a total of the amount of face time requested by the ads still left in the client's ad cache.
FaceTimeUsedToday: This is the amount of face time the client has used toward displaying ads today.
It can be used by the server to determine whether a date-critical ad can be shown today.
DistributorID: This id is used for the bounty system, so that the PlayList Server can identify and credit, commission or otherwise reward the ISP or other organization that distributed this copy of Eudora.
Pastry: This is a cookie the PlayList Server gave to the Eudora e-mail client in the past. It could contain any state information/settings the server wishes to save.
Profile: Profiling information originally entered on the software provider's web page and subsequently/concurrently stored with the e-mail client.
Screen.height: The height of the display on which the ads are shown, in pixels.
Screen.width: The width of the display on which the ads are shown, in pixels.
Screen.depth: The color depth of the display on which the ads are shown, in colors/bits per pixel.
PlayListVersion: The version # of the PlayList routine employed by this particular client.

It will be appreciated that not all of these parameters are likely to be actively used at the same time; some are present to support particular modes of operation (see below), and will not be used in other modes. It should be mentioned here that every PlayList Request is checksummed with MD5. See RFC 1321, "The MD5 Message-Digest Algorithm," at http://www.faqs.org/rfcs/rfc1321.html. The PlayList server 302 preferably ignores requests that fail checksum verification.

After the client makes a PlayList Request, the server 302 replies with a PlayList Response. Preferably, the PlayList Response is divided into two major sections: the ClientInfo section, which updates general client behavior regarding ads, i.e., the speed with which the ads turn over, and the New PlayList itself, which describes the ads the client should fetch. It should be mentioned that the PlayList Server, e.g., server 302, may also return an empty response, meaning that the e-mail client should continue on its course with the ads it already has. It should also be mentioned that every PlayList Response is checksummed with MD5, just as the PlayList Request is. The MD5 digest is encoded in hexadecimal and put in a "CheckSum" header in the PlayList Response. Advantageously, the e-mail clients ignore PlayLists that fail checksum verification.
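A minimal sketch of producing the hexadecimal MD5 digest carried in the "CheckSum" header, using the standard java.security API, follows; the class and method names are illustrative:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Minimal sketch: hexadecimal MD5 digest of a PlayList Request or
    // Response body, as used for the "CheckSum" header.
    public final class CheckSum {
        static String md5Hex(String body) throws NoSuchAlgorithmException {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(body.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(32);
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws NoSuchAlgorithmException {
            // A message whose recomputed digest does not match the header
            // would simply be ignored, per the text.
            System.out.println(md5Hex("<PlayListResponse/>"));
        }
    }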
Before describing the sections of the PlayList Response, it should be mentioned that the e-mail client sometimes becomes, for lack of a better term, befuddled due to old client bugs, server bugs, etc. Sometimes the bad data inherited by even an updated client is too garbled for the system to function properly. While the client could be programmed to detect this condition, it is preferable to leave the task, i.e., error detection, to the server, which can be changed more easily. Thus, when the server detects that a client is "befuddled," the PlayList server 302 responds with just a single command: reset. No ClientInfo should follow, no PlayList should follow, just the reset command. On receiving the reset command, the client discards its accumulated ad databases and records, including PlayLists, faceTime history, ad history, ad caches, etc. Everything is reset to the pristine condition that the e-mail client software had before the Adware software was run for the very first time. It should be mentioned that Link History is exempted from the reset command, both for reasons of practicality and because it is so user-visible. The only other item of ad data that reset does not affect is the ad failure counter, which should be retained across a reset. The client should then recognize that it has no PlayList, and make another request to the PlayList Server for the needed PlayList.
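A minimal sketch of the client's reset handling might read as follows; all of the store names are hypothetical, and only the exemptions for the Link History and the ad failure counter come from the text:

    import java.util.*;

    // Minimal sketch of reset handling: discard accumulated ad state, but
    // retain the Link History and the ad failure counter.
    public final class ResetHandler {
        final List<String> playLists = new ArrayList<>();
        final Map<String, byte[]> adCache = new HashMap<>();
        final List<String> adHistory = new ArrayList<>();
        final List<Long> faceTimeHistory = new ArrayList<>();
        final List<String> linkHistory = new ArrayList<>(); // survives reset
        int adFailureCounter;                                // survives reset

        void onResetCommand() {
            playLists.clear();
            adCache.clear();
            adHistory.clear();
            faceTimeHistory.clear();
            // linkHistory and adFailureCounter are deliberately untouched.
            // The client now has no PlayList and should request a new one.
        }
    }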
The ClientInfo section updates various client parameters. The parameters are listed immediately below.

ReqInterval: This is the number of hours the client should wait before checking for a New PlayList. If ad turnover is high, this will be a small number. A sponsored freeware version might have a much higher number here, so that it checked for a New PlayList only once a week or once a month. Clients may also check for New PlayLists if they have ads with nonzero showForMax values, and the ads have used up much of their time.
HistInterval: This value is the number of days the client must remember that it showed a particular ad. It will report this to the PlayList server so that the server can, at its discretion, choose not to direct the showing of ads for competing services to that particular client; competing ads are separated from one another by the HistInterval value.
Pastry: The previously mentioned cookie. The server can store whatever state information it wishes in this cookie.
Flush: More command than parameter; if present, it causes the client to discard an old PlayList or ad. Flushed ads and PlayLists are removed completely, and no longer show up in ad histories.
Width: The width in pixels the client should make the ad window be.
Height: The height in pixels of same.
FacetimeQuota: The number of seconds of facetime the client should devote to regular ads, before moving to the runout ad.
RerunInterval: The number of days an ad may be "rerun"; that is, shown for free after all other ads and the runout are exhausted. The time is measured from the last non-rerun showing of the ad.

From the discussion above, it will be appreciated that the ClientInfo section is a powerful feature of PlayLists. It allows the software provider to control the application in a global way, including segueing smoothly from one ad model to another. It will be appreciated that if this were the only benefit the software provider derived from PlayLists, it alone would make implementation of PlayLists worthwhile.

As mentioned above, the PlayList Response is divided into two major sections: the ClientInfo section, which updates general client behaviors, and the New PlayList itself, which describes the ads the client should fetch. The New PlayList itself has one global value, PlayListID. This id is the id value that the client returns to the PlayList server the next time the client computer 100a connects to the PlayList server 302. It will be appreciated that this PlayListID advantageously can be included in the PlayList Request, or can be separately uploaded to the PlayList server in a myriad of forms, e.g., as a cookie. The remainder of the PlayList is a list of ads. Each ad is allowed to have many parameters, although it is likely that not all of them will be used with any single ad, and it is possible that some of them will never be used at all. The parameters include the scheduling parameters, which are described in detail above, and ad information, which includes the information listed immediately below.

AdID: A unique identifier for the ad in question. A 64-bit integer, the top 32 bits of which are a server authority id, the bottom 32 bits of which are an identifier unique to the server authority.
Title: A human-friendly string used to refer to the ad.
Src: A URN indicating where to get the actual ad to show. This might be highly specific (e.g., http://media48.doubleclick.net/eudora/coke/drinkcoke.gif) or it might be much more general (e.g., http://ads.doubleclick.net/eudora/ad;ord=136784421?).
Another important PlayList feature is that the PlayList permits the client software to pull ads from many different servers. The software provider could, for example, run its own servers in parallel with those belonging to DoubleClick, and take ads from each server, or some of the servers, based on the PlayList. There can be a checksum attribute on the src tag. If present, its value is a hexadecimal-encoded MD5 digest of the ad data. The client may check this checksum against the ad data.
IsButton: Is this "ad" a toolbar button? If so, it will be scheduled separately from the main ads. The only scheduling parameters that are meaningful for toolbar buttons are startDT and endDT.
IsSponsor: Is this "ad" a sponsor placard? If so, it will be scheduled separately from the main ads.
IsRunout: Is this ad intended to be run after all other ads have exhausted their runs for a given day? There will only be one active isRunout ad in any client's collection of PlayLists.
URN: The Uniform Resource Name of the server (e.g., a Web site address) to which the user is directed when he/she clicks on the ad.

It should be mentioned that the term Uniform Resource Name (URN) indicates a generic set of all names/addresses that are short strings that refer to resources available via the Internet. Thus, URN encompasses both a Uniform Resource Locator (URL), which is a subset of URN schemes that have explicit instructions on how to access a particular resource on the Internet, and a Uniform Resource Identifier (URI), which is another subset of URNs. It will be appreciated that the URL and URI subsets may overlap. It will also be appreciated that the terms URN, URL, and URI advantageously can be used interchangeably; whichever term is used is meant to address the named resource in its broadest possible sense.

It has been mentioned in passing that not all parameters are likely to be used at one time. In fact, PlayLists are flexible enough to support many ad models. PlayLists are crucial to some ad models; to others they are helpful but not central; to still others they are marginally useful, but do not present significant impediments. The use of PlayLists does not predispose the software provider towards any specific ad model; the PlayLists advantageously can be used to support any ad models that the software provider chooses.
Indeed, PlayLists permit the software provider to switch between ad models midstream, should the software provider decide to do so. In the discussion that follows, several ad models will be discussed with respect to Figs. 16A and 16B in an effort to illustrate how PlayLists would be used for each ad model. It will be appreciated that this will demonstrate the essential neutrality of the PlayList concept to the ad model. Fig. 16A illustrates the ad model associated with persistent ads, while Fig. 16B depicts the parameters associated with a short-lived ad model. One thing to notice here is how few of the parameters from any of the sections appear in the chart. It will be appreciated that varying as few as five parameters advantageously causes the Adware to shift between these two distinct ad modes. That is because most of the parameters are largely not relevant to the choice of ad model. The parameters will either be used or not, irrespective of the ad model. For example, the software provider can implement blank space after an ad in any model, and the software provider can eschew blank space after an ad in any model. Most of the parameters fall into this it-just-doesn't-matter category.

With respect to the short-lived ad model, it will be appreciated that the software provider accepts many ads, whether from many advertisers or only a few advertisers. Ads do not persist for many days; they are used up and discarded at a relatively rapid rate. In this model, PlayLists will be used additively. Each time the client runs low on ads, it will ask for another PlayList which will describe a few more ads to mix with the client's existing ads. When ads exceed their allotted time, the ads are discarded. In this ad model, the PlayList server really only serves to transmit parameters for ads. However, that is acceptable, since the parameters have to be transmitted somehow, after all.

Suppose the software provider wants to mix ad models, e.g., desires to provide a mix of long-running ads and short-lived ads. How this situation is handled depends on the stoichiometry. If the cache is or will be filled with mostly persistent ads and only a few short-lived ones, the software provider can merely increase the reqInterval and use PlayLists as in the persistent ad model. In other words, the software provider merely picks a few random ads to go on each PlayList, and picks a few more random ads to go on the next PlayList, which the client will fetch the next day. If, on the other hand, the cache will contain mostly short-lived ads and only a few persistent ads, the computer system 10 will use multiple PlayLists. One PlayList will list the persistent ads, as discussed above; the remaining facetime will be filled using PlayLists of short-lived ads. The above discussion illustrates how PlayLists can be used to support widely differing ad models. The reason PlayLists can do this is that they are really only an extra level of server control in between Eudora and its ads.

Given the importance of ads to Adware e-mail software, one of the software provider's key concerns is "what happens if the Adware does not receive ads?" For example, users or ISPs may simply shut off the flow of ads to Eudora by using firewalls or other means. Alternatively, the user may simply delete ads or PlayLists (or both) from, for example, his/her computer on a random or periodic basis. If this happens, then users will have no ads to display, i.e., the users get the full-featured version of Eudora without either seeing ads or paying.
This would defeat one significant aspect of the exemplary software according to the present invention. On the other hand, users may have hardware or software problems or other issues that keep them from fetching ads, or the software provider's ad servers might even be down for some reason. Users should not be punished for this. The software provider will distinguish between these two situations by asking a simple question, i.e., is the user sending or receiving mail? If the answer is yes, the software provider will assume that the blocking of ads is something the software provider needs to address. The way the software provider addresses this issue is with an escalating series of Ad Failure Nags. These will continue for two weeks or until the software receives ads. For every two days the software does receive ads, the software will decrement the Ad Failure Nag timer by one day. If the timer runs out, the software will display an apology to the user, revert to the Freeware version, and mark the user's software as owned by a Deadbeat User. Deadbeat Users will only be allowed to return to Adware if the ad server can be connected to at the time the user attempts to return to Adware. See Figs. 17A-17C. It should be noted that if the software provider should ever decide to retire Eudora and wishes to let people use it without ads, the software provider can simply publish a permanent registration code.

Alternatively, the e-mail client advantageously includes several more sophisticated functions for determining that an ad failure condition requires the employment of the Ad Failure Nag discussed above. For example, the client device can identify an ad download failure condition when a corresponding ad download function has failed to download ads during a predetermined period of time. In addition, the e-mail client device can identify an ad display failure condition when a corresponding ad display function has failed to display ads for a predetermined time period, e.g., the time(s) specified in the New PlayList received from the PlayList server and/or the current PlayList(s) stored for use by the e-mail client device. Either condition invokes the Ad Failure Nag function discussed above.
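The nag arithmetic described above might be sketched as follows, assuming a hypothetical once-per-day tick; the two-week limit and the two-for-one decrement are taken from the text:

    // Minimal sketch of the Ad Failure Nag timer: nag for up to 14 days
    // without ads; every two days with ads credits the timer one day back.
    public final class AdFailureNag {
        private int failureDays; // running count of ad failure days
        private int goodDays;    // days with ads since the last credit

        // Called once per day (the scheduling itself is hypothetical).
        void tick(boolean receivedAdsToday) {
            if (!receivedAdsToday) {
                failureDays++;
                if (failureDays >= 14) revertToFreeware(); // Deadbeat User
                else nagUser();                            // escalating nag
            } else if (++goodDays == 2) {
                goodDays = 0;
                if (failureDays > 0) failureDays--;        // decrement one day
            }
        }

        private void nagUser() { /* display the Ad Failure Nag */ }
        private void revertToFreeware() { /* apologize; mark Deadbeat User */ }
    }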
One of the things the software provider will need to know is that the ads the software provider thinks are being displayed are actually being displayed, thus confirming that the ads are being displayed as frequently and for as long as the software provider thinks they are being displayed. It will be appreciated that this will be crucially important to maintaining credibility with advertisers. An exemplary audit scheme contains the following features:
- Keep a rotating log of ad displays. This log will be rolled over once per week. The log will record ad-related events, i.e., when an ad was displayed, when it was removed, and when it was clicked on, in addition to other events, like cumulative face time in Eudora, cumulative run time, etc.
- At random, ask the user for permission to transmit the log. At a frequency of one out of every hundred users per month, ask for the user's permission to return the log to the software provider. If the permission is given, the log will be formatted in ASCII, placed in an outgoing message, and queued. The user will be given the opportunity to inspect and, if he/she desires, cancel the log collection. See Fig. 18A.
- For selected users, deliver a pastry. In addition to the random send of the log, the software provider will also, at random, ask particular users for their permission to audit transactions in detail with the server. This will allow the software provider to correlate client and server behavior.

Additional details on instrumentation applicable to the exemplary Eudora e-mail client software are provided in Figs. 18B-18E.

The various state flow diagrams illustrated, for example, in Figs. 5A, 6A, 7A, 8 and 9, referred to a plurality of web pages, i.e., HTML pages that can be accessed and retrieved from one of the software provider's servers, e.g., registration server 301. See Fig. 1. The general purposes of these pages, and the URNs which the software uses to access these pages, will now be described in greater detail below. It will be appreciated that it will be helpful for the client to give the server information to help the server direct the user to the proper location or to assist the user by prefilling certain items on Web page based forms. That is the function of the query part of the URNs. The elements that might go in query parts are listed below. It will be noted that the query parts are divided into two groups; the first group includes items which are considered personal, and great care should be taken to transmit them only when appropriate, while the second group includes items which are not considered to be privacy-sensitive.

Realname: The Real Name field from the user's Dominant e-mail personality. (EP4 supports multiple e-mail personalities for both IMAP4 and POP3 e-mail accounts.)
Regfirst: The first name under which the user registered last time (if any).
Reglast: The last name under which the user registered last time (if any).
Regcode: The user's current Eudora registration code (if any).
OldReg: The user's old-form RegCode.
e-mail: The e-mail address from the user's Dominant personality.
Profile: The profile information the user has entered.
Destination: This is the URN which the user wishes to visit.
Adid: This is the id of an ad on which the user clicked.
Platform: MacOS, Windows, Palm, Nintendo 64, etc.
Product: The software provider's code name for the product being registered: Eudora, PDQMail, etc.
Version: The version number of the product being used to register. This should be of the form Major.Minor.Bugfix.Build.
DistributorID: This will be a code which sites may apply for, which will, in turn, allow the site, i.e., its controlling entities, to receive a continuing revenue stream in return for providing users with this custom-branded copy of Eudora.
Action: What it is the user has requested to do: register, pay, lostcode, etc.
Mode: Either Payware, Adware, or Freeware.
Topic: Used for support items; this tells the server what particular kind of support is needed.

Typically, all of the software provider's non-ad URNs begin with: http://jump.eudora.com/jump.cgi?action=whatever. The "action" value determines what function the user wishes to perform. The software provider then appends various other query parts to the URN, suitably %-escaped, i.e., separated by a percentage (%) or ampersand (&) symbol (for example), according to the chart illustrated in Fig. 19. A brief discussion of each type of web page referenced in Fig. 19 is provided immediately below.

PAYMENT WEB PAGE: This web page should take the user's credit card info, name, e-mail address, and whatever other information the software provider wants to compile about its users. It will also ask them for a question and answer for use if they ever lose their payment code. It should return, e.g., display and also e-mail, their official registration name and registration code.
FREEWARE REGISTRATION WEB PAGE: This web page should take the same info as the Payment web page, minus the credit card information. It should send back (that is, display and also e-mail) their official registration name and registration code.
ADWARE REGISTRATION WEB PAGE: This web page should take the same info as the Payment web page, minus the credit card information. It should send back (that is, display and also e-mail) their official registration name and registration code.
BOX REGISTRATION WEB PAGE: This web page exists to accept registrations generated by Box or updater installers. It should simply accept the user's code, validate it, mail it back, and display a "thank you for registering" page or dialog box.
LOST CODE WEB PAGE: This web page helps users find their registration codes.
When users register/pay, they will be asked to provide their name, e-mail address, and a question and answer. When they come to the lost code page, they will be asked for name and address, and if that matches, they will be asked their question. If all that goes well, their RegCode will be mailed to them. If they cannot receive mail, they will have to call.
UPDATE WEB PAGE: This web page should list the updates that are available to the user. Ideally, it would list only those updates the user does not already have, and clearly indicate which updates are free and which updates the user needs to pay for. This web page will be downloaded to the user's system from time to time and displayed "offline" in Eudora, and so it should be kept small.
ARCHIVED VERSIONS WEB PAGE: This web page should list all versions of Eudora, so that users can download whatever they happen to need.
PROFILE WEB PAGE: The purpose of this web page is to collect demographic information so that ads delivered to the user can be more precisely targeted by advertisers. At this page, the user will be asked a series of questions about his/her personal preferences, habits, etc., e.g., buying habits, sleeping habits, preferences in clothing, etc. No information identifying the user is to be collected on this page! The information will be reduced to a cookie, mailed to Eudora, and stored as part of the user's settings in the Eudora directory (folder). The procedure for accepting a profile is the same as the procedure for accepting a registration code, detailed below.
SUPPORT WEB PAGES: The software provider will need several web pages for resolving user problems.
For these pages, the software provider will use the "topic" part of the query to direct users to situation-specific help as needed.

Having discussed the client side of the overall system illustrated in Fig. 1, it is now time to turn to the server side of the system. The network will not be discussed in detail, however, as it is something well known in the art. In particular, the PlayList Server (PLS) or Servlet, i.e., the applet responding to the PlayList Request, shall now be described in detail. The PLS is a server side program which services HTTP requests and returns HTTP responses. It will be appreciated that each request launches a different thread, and that the data format of communications between the client and the PLS is XML-encoded in the exemplary embodiment. The PLS advantageously can be instantiated using the following Java packages.

XP: XP is an XML 1.0 parser written in Java. The parser checks a given XML document for well-formedness and validity. Additional information is available from http://www.jclark.com/xml/xp/. The PLS uses the XP parser for: 1. parsing the client request to ensure that it is valid; and 2. parsing the PlayList Response to ensure that it is valid.
SAX: SAX (Simple API for XML) is a standard interface for event-based XML parsing; the parser reads the XML document line by line and initiates events that contain information about the line that was just read. The PLS listens to particular events of interest and extracts the data from the XML document in that way. Additional information is available from http://www.megginson.com/SAX/. The PLS uses the SAX interface both in the XML request and in the XML response; in the request, the PLS "looks" for specific tags to build the request object, and in the response, the PLS sends events to generate the PlayList XML response.
MM.MySQL: MM.MySQL is a Java Database Connectivity (JDBC) Type-4 driver, i.e., an all-Java driver that issues requests directly to the PlayList server database. It will be appreciated that this is the most efficient method of accessing the database. The JDBC API is made up of classes and interfaces found in the java.sql and java.text packages. Additional information is available at http://www.worldserver.com/mm.mysql/. The PLS uses the JDBC methods to: 1. establish connection(s) to communicate with the database using JDBC; the PLS first establishes a connection through the appropriate JDBC driver, and the connection object can be used to perform all operations on the given database (in an exemplary case, the PLS will create a pool of connection objects during the Servlet initialization); and 2. execute SQL statements and retrieve results; the PLS performs a SQL query to the database using both Statement and PreparedStatement objects.
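By way of illustration, the event-based request parsing described for the PLS might be sketched as follows, using the standard org.xml.sax interfaces (driven here through the JAXP factory); the element names reuse the illustrative request sketch above and are not a fixed schema:

    import java.io.StringReader;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.InputSource;
    import org.xml.sax.helpers.DefaultHandler;

    // Minimal sketch: listen for the tags of interest in a PlayList Request
    // and build up a request object from them.
    public final class RequestHandler extends DefaultHandler {
        String playListId;

        @Override
        public void startElement(String uri, String local, String qName,
                                 Attributes atts) {
            if ("PlayList".equals(qName)) {
                playListId = atts.getValue("id"); // extract the data of interest
            }
        }

        public static void main(String[] args) throws Exception {
            String xml = "<PlayListRequest><PlayList id=\"PL-19991102-01\"/></PlayListRequest>";
            RequestHandler h = new RequestHandler();
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new InputSource(new StringReader(xml)), h);
            System.out.println("client PlayList: " + h.playListId);
        }
    }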
What follows is an explanation of task flow in the PLS when the Servlet doPost method is invoked. See Fig. 20. The PLS parses the XML request and builds objects that represent the client update request. It will be noted that data access is performed using SAX. When logging the client request, the PLS stores the client request information in a so-called ClientUpdate table (not shown). It will be appreciated that PlayList Requests can be received from a plurality of e-mail clients residing on the client computers, generally denoted 100n, throughout any given day. When issuing the same SQL statement repeatedly, it will be appreciated that it is more efficient to use a PreparedStatement rather than generating a new Statement in response to each query. In the logging operation, the software provider advantageously can employ the following semantic to avoid repetitive Statement generation:

    PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO ClientUpdate (date, userAgent, PlayListId, ...) VALUES (?, ?, ?, ...)");

It should be mentioned that in generating a New PlayList, the Servlet advantageously can employ both SQL queries and programmatic filtering. It will also be appreciated that these processes are synchronized in order to prevent conflicts when accessing the database. Appropriate pseudo code for generating a PlayList is depicted in Figs. 21A and 21B. The first block of pseudo code in Fig. 21A generates an ad list. It will be appreciated that the ad list generated by the first block of pseudo code holds all the image ads that are active and can be delivered within a predetermined time frame. The second block of pseudo code listed in Fig. 21A calculates the time needed to deliver the ads. The third block of pseudo code, which is illustrated in Fig. 21B, determines additional ads which can be used to fill the available facetime. In other words, if the e-mail client software has remaining time to fill, the generated PlayList will automatically fill the available time with runout ads, i.e., find a runout ad which is not in the ad history and which also fits into the goal show time left.
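The pseudo code of Figs. 21A and 21B is not reproduced here, but the same filtering idea might be sketched in Java as follows; the Ad type and its fields are hypothetical stand-ins for rows returned by the SQL queries:

    import java.util.*;

    // Minimal sketch of PlayList generation in the spirit of Figs. 21A/21B:
    // collect active ads that fit the delivery window, total their show time,
    // then fill any remaining face time with a runout ad not in the history.
    final class Ad {
        String id; boolean active; boolean isRunout; long showForSec;
        Ad(String id, boolean active, boolean isRunout, long showForSec) {
            this.id = id; this.active = active; this.isRunout = isRunout;
            this.showForSec = showForSec;
        }
    }

    final class PlayListGenerator {
        static List<Ad> generate(List<Ad> candidates, Set<String> adHistory,
                                 long goalShowTimeSec) {
            List<Ad> playList = new ArrayList<>();
            long scheduledSec = 0;
            for (Ad ad : candidates) {           // blocks 1 and 2: active ads, time used
                if (ad.active && !ad.isRunout
                        && scheduledSec + ad.showForSec <= goalShowTimeSec) {
                    playList.add(ad);
                    scheduledSec += ad.showForSec;
                }
            }
            for (Ad ad : candidates) {           // block 3: fill with a runout ad
                if (ad.isRunout && !adHistory.contains(ad.id)
                        && scheduledSec + ad.showForSec <= goalShowTimeSec) {
                    playList.add(ad);
                    break;                       // one active runout ad suffices
                }
            }
            return playList;
        }
    }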
The package XP Writer provides a set of methods for creating specific kinds of nodes in the output XML code, i.e., the output file. The following is a short list of the methods the PLS employs in generating the XML output: a method that starts an element (writes a start-tag); a method that ends an element (writes an end-tag) or closes the current start-tag as an empty element; Attribute, which adds attributes to a tag in name-value pair format; and Comment, which writes a comment. The PLS stores the information generated in response to a request in two tables, a PlayList general response table, which holds the client info section and PlayList general information, and a PlayList specific response table, which holds the entry section. It will be appreciated that the PLS advantageously can use the prepared statement API to optimize performance in response to a query. Referring again to Fig. 20, that figure illustrates a class diagram which advantageously describes the representation and rendering of the PlayList, as well as the PlayList Response. It will be appreciated that this class diagram includes repeated XML Write method calls; these method calls are employed by the PLS to generate the XML tags associated with the PlayList. Turning now to Fig. 22, that figure illustrates the major PlayList Servlet classes, which collectively define the PlayList Servlet. More specifically, the PlayListRequest class handles the request and subsequently maps the XML request to the clientUpdate object, while the PlayListResponse class handles the response and writes the clientUpdateResponse back to the client. In addition, the PlayListsGenerate class generates the PlayLists while the DBManager class handles the database connection pool. Additional details are readily apparent from Fig. 22. It will be appreciated from Fig. 23 that all of the storage operations employing the database advantageously can be threaded. As mentioned above, all actions with respect to the database are performed using the MM.MySQL package. In summary, one exemplary embodiment of the present invention encompasses software for converting a general purpose computer into a specialized PlayList server for supplying a PlayList Response to a client device for exchanging information with an information server system over a communications network and storing ads. More specifically, the software instantiates a PlayList Response generation function for generating a PlayList Response identifying a plurality of selected ads to be presented by the client device, and a first communications function that completes a PlayList Response send communication link with the client device via the communications network over which the PlayList Response is transmitted to the client device, wherein the information server system and the PlayList server are independently controlled. It will be appreciated that, while the PlayList directs the presentation, e.g., display, of ads on the client device, e.g., an e-mail client, the ads advantageously may be delivered to or retrieved by the client device in any number of ways in this preferred embodiment. In this exemplary embodiment, the PlayList Request preferably includes ad identifiers and ad presentation instructions; corresponding uniform resource names (URNs) can be included but may be omitted.
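Returning to the node-writing methods listed above: the XP Writer package itself is not publicly documented, so the following sketch uses the standard StAX XMLStreamWriter to show the same four kinds of calls (start-tag, attribute, comment, end-tag) that the PLS would use to emit PlayList tags. The tag and attribute names are assumptions.

import java.io.StringWriter;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

// Illustrative only: emits a tiny PlayList response using the standard
// StAX writer in place of the patent's XP Writer package.
public class PlayListXmlDemo {
    public static void main(String[] args) throws Exception {
        StringWriter out = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newInstance().createXMLStreamWriter(out);

        w.writeComment("PlayList general response");  // Comment node
        w.writeStartElement("PlayList");               // element start-tag
        w.writeAttribute("name", "PL-2000-06-01");     // name/value attribute
        w.writeStartElement("entry");
        w.writeAttribute("adId", "42");
        w.writeEndElement();                           // </entry>
        w.writeEndElement();                           // </PlayList>
        w.flush();

        System.out.println(out);
    }
}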
According to another exemplary embodiment, the present invention encompasses software for converting a general purpose computer into a specialized PlayList server for supplying a PlayList Response to a client device exchanging information with an information server system and receiving ads from an ad server over a communications network. The software advantageously includes a PlayList Response generation function for generating a PlayList Response identifying a plurality of selected ads to be presented by the client device, and a first communications function that effects a PlayList Response send communication link with the client device via the communications network over which the PlayList Response is transmitted to the client device. Preferably, the information server system and the PlayList server are independently controlled. It will be appreciated that this exemplary and non-limiting embodiment of the present invention contemplates a specific communications channel between the client device and a dedicated ad server (system) for delivery of ads defined by the PlayList. It will also be appreciated that the PlayList Request employed by this exemplary embodiment includes both information dictating presentation of the ads and/or operation of the client device with respect to ad presentation functions, and the name and URN for ads included in a New PlayList. According to yet another exemplary embodiment, the present invention provides software for converting a general purpose computer into a specialized PlayList server for supplying a PlayList Response to a client device exchanging information with an information server system and receiving ads from an ad server over a communications network, including: a PlayList Response generation function for generating a PlayList Response identifying a plurality of selected ads to be presented by the client device; a PlayList Request parsing function for extracting selected information from the PlayList Request; a PlayList generation function receiving an output of the database driver function for generating a PlayList, for inclusion in the PlayList Response, which identifies a plurality of selected ads to be presented by the client device in response to receipt of a PlayList Request; a selected information supply function for supplying the selected information to the PlayList Response generation function to thereby initiate the PlayList generation function; a first communications function that effects a PlayList Response send communication link with the client device via the communications network over which the PlayList Response is transmitted to the client device; and a second communications function that effects a PlayList Request receive function with the client device via the communications network, wherein the information server system and the PlayList server are independently controlled. Preferably, the PlayList Request parsing function includes an extensible markup language (XML) parsing function for verifying the well-formedness of the PlayList Request, a PlayList analysis function receiving the PlayList Request after verification by the XML parsing function for generating an object, and a database driver function receiving the object for building a query from the object and applying the query to a PlayList server database.
It should be noted that the PlayList Response generation function is initiated by receipt of a PlayList Request, which, in an exemplary case, includes the name of the current PlayList(s) employed by the client device providing the PlayList Request. While each of the numerous client devices connected to an information server generates a PlayList Request, the discussion of this specific aspect of the present invention, i.e., the PlayList server, can best be understood from the point of view of a system including only one client device; the actual implementation of, for example, the e-mail client device contemplates the use of thousands of client devices. The PlayList Request advantageously can include information regarding the currently running PlayList(s) on the client device, and user data fields that store data regarding the progress made by the client device in presenting, e.g., displaying, the ads stored by the client device. An exemplary and non-limiting list of the information that can be provided to the PlayList server via the PlayList Request includes: a first user data field identifying a current PlayList; a second user data field identifying user demographic data; a third user data field identifying user/client device behavior data; a fourth user data field identifying usage history of the client device; a fifth user data field identifying the respective software operating on the client device; a sixth user data field identifying the respective operating system of the client device; a seventh user data field identifying the amount of time the user has used the client device over a prescribed time interval; an eighth user data field identifying the total amount of display time required for the stored ads that remain to be presented by the client device; a ninth user data field identifying the total number of times that ads were presented by the client device during the prescribed time interval; a tenth user data field identifying the dimensions of a display screen associated with the client device; and a list of the ad identifiers corresponding to advertisements that have been displayed in the prescribed most recent time interval. Advantageously, the PlayList Request parsing function can extract selected information from the PlayList Request and employ the selected information and other information, e.g., information provided by the entity controlling the PlayList server, in generating the PlayList Response. It will be appreciated that the PlayList Request may include all or a subset of the information listed immediately above; the PlayList Request parsing function extracts information contained in at least one of the user data fields. In any event, the receipt of the PlayList Request by the PlayList server initiates generation of the PlayList Response. In response to the PlayList Request, the PlayList Response generation function generates one of an action command and the PlayList Response. With respect to the former, the PlayList Response generation function advantageously can generate the action command in response to receipt of a garbled PlayList Request. This can generally be thought of as an error code directing the client device to send a new PlayList Request. It will be appreciated that the action command can include an associated error message, which is presentable to the user by the client device.
Alternatively, the action command may cause the client device to delete all of the ads received and/or stored by the client device responsive to a command issued to the PlayList server by an entity controlling the PlayList server. In other words, there are times when the software provider may wish to flush the existing ads; the entity controlling the PlayList server, e.g., the software provider, sends a command to the PlayList server, which command causes the PlayList server to respond with a flush command to either specific PlayList Requests, e.g., PlayList Requests generated by a particular software version, or all PlayList Requests. With respect to the latter, a detailed discussion follows. As discussed above, the PlayList Response advantageously includes both client information, i.e., information regarding how the client device, e.g., a PDA device, is to present, e.g., display, the selected ads, i.e., the ads that are to be presented during the time period following receipt of the PlayList Response by the client device, and a New PlayList. For example, selected parameters included in the client information advantageously can switch the client device between a persistent presentation mode and a short-lived presentation mode of presenting the ads. The client information can, in an exemplary case: control the turnover rate of the ads presented by the client device; specify the periodicity at which the client device generates the PlayList Request; establish a minimum time separation between competing ones of the ads; and establish specifications directing the manner in which the client device is to present each of the ads. For example, when the ads available to the client device include both current ads (paid ads) and expired ads (free ads), the client information includes a minimum time period during which the client device presents the current ads before the client device presents the expired ads. The client information may also establish a maximum time period during which the client device is permitted to present the expired ads. In any event, the PlayList Response advantageously may include commands or selected parameters which direct the client device to either concatenate the New PlayList to the current PlayList(s) or discard the current PlayList(s) in favor of the New PlayList. The command, or the selected parameters, controlling this facet of the client device operation is executed upon receipt of the PlayList Response by the client device over the effected communications link. The New PlayList included in the PlayList Response includes a name and a corresponding Uniform Resource Name (URN) for each of the selected ads. It will be appreciated that the URN can correspond to one of a storage location of the respective named ad on an ad server or a location on the ad server redirecting the client device to a location on another storage device for the respective named ad. Alternatively, the URN specifies a location on the ad server redirecting the client device to an ad storage location collocated on the ad server for the respective named ad. It should be mentioned at this point that, in addition to the name and URN of each of the selected ads, the New PlayList may also include information identifying an ad type, i.e., postage stamp ad, toolbar ad, or placard ad, for each one of the respective selected ads.
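As a data-structure illustration of the client information parameters just described, the following hypothetical Java holder gathers them in one place. All field names, units, and the persistent-mode heuristic are assumptions, not the patent's actual wire format.

// Hypothetical data holder for the "client info" section of a PlayList
// Response described above.
public class ClientInfo {
    int adTurnoverSeconds;        // how often the client rotates displayed ads
    int playListRequestHours;     // periodicity of PlayList Requests
    int competingAdGapSeconds;    // minimum separation between competing ads
    int currentAdMinMinutes;      // minimum face time for current (paid) ads
    int expiredAdMaxMinutes;      // cap on face time for expired (free) ads
    boolean concatenateNewPlayList; // true: append New PlayList; false: replace

    boolean persistentMode() {
        // One plausible reading: a long turnover interval implies the
        // persistent presentation mode rather than the short-lived one.
        return adTurnoverSeconds >= 3600;
    }
}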
It should be noted that in at least one exemplary embodiment of the present invention, the PlayList server instantiated by software stored on the server computer 302 advantageously responds to a PlayList Request written, i.e., coded, in extensible markup language (XML). One of ordinary skill in the art of documents generated in XML will appreciate that these documents, e.g., the PlayList Request, advantageously can have an associated document type definition (DTD). In order to optimize system performance, the PlayList server should have the DTD available, i.e., available to the PlayList Request parsing function. There are several options for ensuring that the DTD is available to the PlayList server. First, the DTD for each of the different types of client devices, e.g., e-mail client device or PDA, is stored by the PlayList server. In that case, the PlayList Request need only include a DTD tag, which identifies the particular DTD to be employed by the PlayList Request parsing function. Second, the DTD advantageously can be embedded in the PlayList Request. In either case, both the PlayList server and the client device implicitly use the same DTD. It should be mentioned that the software provider should make provisions with respect to ad security. There are really two security issues to consider. One is whether or not the client is getting valid ads (call this client security), and the second is whether or not a valid client is fetching ads (call this server security). Client security is of relatively small importance. If a given person manages to trick Eudora into displaying some ads other than those transmitted by the software provider, it probably doesn't matter a great deal. This is not to say that it could not become problematic if large numbers of clients at one or more sites began doing it; however, a carefully worded license agreement should make at least large sites avoid actions which would cause this particular problem. However, to avoid trivial attacks, PlayLists and ads advantageously can be checksummed with MD5 (or another mechanism), and the checksums recorded in the PlayList. Then the client can checksum the PlayList and ads using the same secret seed, and compare its checksums to those in the PlayList. If it fails to get the proper ads, this will be treated as a failure to get ads at all. Server-side security is potentially a much bigger problem. The software provider intends to charge advertisers for ads, based on the understanding that the software provider's users will actually see the ads the software provider is charging for. To do this with confidence, the software provider should ascertain that it is actually Eudora that is downloading the ads, and not some rogue process written to fetch many ads. Why would someone bother to fetch ads? While the software provider can't discount the "because they can" motivation of the amateur hacker, the real issue is the ad revenue, i.e., ad bounty. Because every ad fetch can generate revenue for a third party, there is a very significant financial incentive for that third party to cause a lot of ad fetches. It thus becomes imperative that the software provider prevent (and/or detect) ad fetches not made by copies of Eudora. Given that such fetches may be in violation of the agreement the software provider signed with the distributor, these fetches could constitute a form of fraud. There are several different approaches to fraud detection which advantageously can be implemented in the software running, for example, on Ad server 303.
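The client-side checksum comparison described above might look like the following Java sketch. The patent does not specify the exact construction, so prepending the shared secret seed to the hashed bytes is an assumption.

import java.security.MessageDigest;

// Sketch of the client-side check described above. The patent only says
// PlayLists and ads "can be checksummed with MD5" using a shared secret
// seed; the seed-prepending construction here is an assumption.
public class AdChecksum {
    static byte[] seededMd5(byte[] secretSeed, byte[] payload) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        md5.update(secretSeed);   // shared secret known to client and server
        md5.update(payload);      // the PlayList or ad bytes
        return md5.digest();
    }

    // Returns true only if the ad's checksum matches the one recorded in
    // the PlayList; a mismatch is treated as a failure to get ads at all.
    static boolean verify(byte[] seed, byte[] adBytes, byte[] recorded) throws Exception {
        return MessageDigest.isEqual(seededMd5(seed, adBytes), recorded);
    }
}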
Whatever method the software provider eventually uses to prevent fraud, it will be important also to detect fraud should it occur. There are two broad classes of fraud detection: authentication and statistical analysis. Authentication is easily understood; if the program fetching the ads fails to prove that it is a valid copy of Eudora, the software provider will be alerted to possible fraud. However, authentication provides challenges of its own, and may be impossible or impractical or simply unnecessary. Statistical analysis has some significant benefits, but also significant drawbacks. The benefits include minimal work in the client (and hence no vulnerability to disassembly, etc.), no run-time burdens on either the client or the server, i.e., everything can be done "after the fact" during accounting runs, being easily changeable from the software provider's end, the ability to be applied retroactively, etc. The drawbacks to statistical analysis include that statistical analysis will never be entirely certain, and that the software provider may not collect the proper statistics, etc. A listing of parameters or statistical measures that the software provider may gather or compute is presented immediately below.
ClientID: It's hard to see a way to avoid generating some sort of client ID for use with fetching ads. The software provider might hope that such identifiers will be self-validating, but it is preferable that the software provider needs to know what particular installation of Eudora is actually fetching ads. This can then be used in compiling statistics and performing computations. By "installation" the software provider means a single storage system directory (PC) or folder (Mac) with a Eudora mail structure in it, i.e., data interchanged between the e-mail client and at least one server and not necessarily the e-mail client itself, per se.
IpAddress: The software provider will likely want to log requests by the IP address of the originating e-mail client.
DistributorID: Of course, a cornerstone of the referral payment system is the fact that the software provider will record the distributor ID for the client fetching ads.
The software provider should collect this when users pay or even register the software.
NumPaidUsers: This statistic is the number of paid users with a given distributor ID.
NumClientIDs: This statistic is the number of client IDs with a given distributor ID.
NumAdsFetched: The number of ads fetched by a particular client ID.
Given the raw data available from monitoring the parameters listed above, the following is an exemplary and non-inclusive list of possible statistical measures which can be generated.
NumAdsFetched: A client ID with a very high number of ads fetched is suspicious.
NumClientIDs/NumPaidUsers: Paid users is a very hard number, because the software provider will have collected credit card information and charged against this card. Thus, it can serve as a useful measuring stick for how many clients the software provider can expect. A particular distributor with a very high ratio, or a ratio that suddenly goes higher, bears investigation.
One of the issues which the software provider must be very cognizant of is the protection of the user's privacy, i.e., the user generally does not want to receive ads based on information that the user unknowingly submitted to the software provider. There is an extremely vocal and paranoid subset of the user community, who object to practically all forms of information gathering, even the most benign. Even relatively innocent devices like serial numbers are considered something to be completely avoided. While the serial number of a software program may seem like a trivial matter to the software supplier, users who object to this type of "tagging" exist, and the software provider should be cognizant of such users. In order to avoid such concerns to the maximum extent possible, the software provider should adopt a Confidential Information Policy which includes the following provisions: Obtain Permission: Before the software provider gathers or transmits any data that might identify the user to the advertiser, the software provider should obtain the user's explicit (see Fig. 18A) or near-explicit permission. The term near-explicit is employed to denote that the software provider may, for example, put a special privacy warning in the web page where the user registers a software program such as Eudora. Here, the user is clearly taking an action to submit data to the software provider; as such, explicit permission shouldn't be needed. On the other hand, the software provider should go out of its way to identify areas where an unreasonable user might be able to claim that he/she didn't know he/she was giving information to the software provider, and ask for explicit permission there, even if it seems relatively obvious to the software provider.
Data Separation: Insofar as possible, the software provider should maintain payment information separate from registration information, and both types of information should be maintained separate from demographic information, etc. While it may be very tempting to correlate databases, the software provider faces potential crucifixion if the databases are actually correlated. Moreover, since the software provider can still deliver very targeted advertising without database correlation, the software provider should maintain separate databases.
User Verifiability: Insofar as possible, protections established by the software provider should be verifiable by end users with packet sniffers. The software provider may even encourage the practice of watching the software's, e.g., Eudora's, actions. It is one thing to say "The software provider does not give your personal data to advertisers;" it is quite another for the user to be able to verify that this is the case.
Strong Public and Private Commitment: The software provider needs to be clear and public with its privacy policies, and the software provider needs to respect them internally. If the software provider merely views privacy as something the software provider must do to avoid adverse press coverage, the software provider will do it poorly and wind up in trouble.
In summary, the present invention encompasses a multi-moded software product, e.g., e-mail software, which includes three "self-contained" different versions (or "modes"), including: a "first full feature set" version which is activated when the software product is paid for by the user (i.e., a "Payware" version); a "second full feature set" version which is activated when the user agrees (e.g., either by default or by explicit agreement) to accept advertisements delivered to the client device in order to subsidize the software product (i.e., an "Adware" version); and a "reduced feature set" version which is activated when the software product is not paid for (i.e., a "freeware" version) and the "second full feature set" version is not activated. The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have such multi-moded software installed thereon. It will be appreciated that the first and second full feature sets are identical with respect to e-mail support features; it will also be appreciated that the second full feature set includes PlayList and ad fetching and display features which are dormant in the first full feature set. Moreover, the present invention further encompasses multi-moded software as set forth above, wherein the multi-moded software includes a mode switching function which automatically switches from the "Adware" version to the "freeware" version upon detecting a prescribed condition (e.g., based upon monitored user activity level, and/or less than a prescribed number of ads having been downloaded, i.e., a "deadbeat user" criterion). The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have such multi-moded software installed thereon.
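A minimal sketch of the mode-switching function summarized above follows; the activation checks and the numeric "deadbeat user" threshold are illustrative assumptions.

// Sketch of the mode-switching function described above; the checks and
// the ad-count threshold are assumptions, not the patent's actual rules.
public class ModeSwitcher {
    enum Mode { PAYWARE, ADWARE, FREEWARE }

    static Mode currentMode(boolean paidFor, boolean adsAccepted,
                            int adsDownloadedThisWeek) {
        if (paidFor) {
            return Mode.PAYWARE;           // first full feature set
        }
        // "Deadbeat user" criterion: too few ads downloaded demotes the
        // Adware version to the reduced-feature freeware version.
        if (adsAccepted && adsDownloadedThisWeek >= 10) {
            return Mode.ADWARE;            // second full feature set
        }
        return Mode.FREEWARE;              // reduced feature set
    }
}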
It will be appreciated from the discussion above that the present invention further encompasses multi-moded software as set forth above, wherein the multi-moded software includes a mode switching function which automatically switches from the "Adware" version to the "freeware" version upon detecting occurrence of a prescribed "ad failure condition", e.g., less than a prescribed number of ads having been received and/or displayed by the client device within a prescribed time period, and an "Ad Failure Nag" function which monitors "time since last Nag" and which generates an "Ad Failure Nag" according to a "Nag Schedule" which is dynamically varied based on the monitored "time since last Nag" information and/or based on cumulative ad download/display statistics or information. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this multi-moded software product installed thereon. In one exemplary embodiment, the present invention further encompasses multi-moded software as set forth above, wherein the multi-moded software includes a Nag function which generates different types of Nags dependent upon the current mode of the software product which is currently activated, and/or based upon time since the last Nag was generated, and/or based on cumulative ad download/display statistics or information, and/or based on other monitored conditions. For example, the different types of Nags could include a "Registration Nag", a "Payware Nag", an "Adware Nag", an "Update Nag", and an "Ad Failure Nag". The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this multi-moded software product installed thereon. In another exemplary embodiment, the present invention encompasses a software product (e.g., e-mail software) that incorporates an automatic advertisement download function for automatically downloading advertisements to be displayed when the software is activated, and a control function for monitoring user activity levels and for controlling the display of downloaded advertisements at the client device based upon the monitored user activity levels (e.g., based upon "discrete" and/or "cumulative" ad display parameters). The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. The present invention also encompasses an e-mail software product that incorporates a control function for automatically downloading advertisements from a remote server system which is separate and independent from the e-mail server system, as well as the system and method for automatically distributing the advertisements to client devices which have this e-mail software product installed thereon. In particular, the system includes an ad server system that manages, administers, and controls the distribution of advertisements, and which is controlled by a control entity (e.g., one operated by the present assignee, QUALCOMM INCORPORATED) which is separate and independent from the control entity which controls the e-mail server system which provides e-mail services to any particular client device which has this e-mail software product installed thereon. Thus, in sharp contrast to the Juno Online Services system, in accordance with this aspect of the present invention, the ad server system and the e-mail server system are operated independently, i.e., under the control of separate and independent control entities.
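The dynamically varied Nag Schedule described above could be sketched as follows; the Nag intervals and the ad-failure escalation rule are assumptions.

// Sketch of a dynamically varied Nag Schedule; intervals are assumed.
public class NagScheduler {
    enum NagType { REGISTRATION, PAYWARE, ADWARE, UPDATE, AD_FAILURE }

    // Decide whether to nag now, given hours since the last Nag and the
    // number of ads successfully displayed in the last 24 hours.
    static boolean shouldNag(NagType type, long hoursSinceLastNag, int adsShownToday) {
        long interval = switch (type) {
            case REGISTRATION -> 7 * 24;   // weekly
            case PAYWARE, ADWARE -> 3 * 24;
            case UPDATE -> 14 * 24;
            case AD_FAILURE -> adsShownToday == 0 ? 24 : 72; // escalate on failure
        };
        return hoursSinceLastNag >= interval;
    }
}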
Advantageously, the present invention also encompasses a software product, e.g., e-mail software, which incorporates an automatic advertisement files download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a control function for locally controlling the display of downloaded advertisements at the client device based upon ad parameters included in the downloaded advertisement files, e.g., including (for each ad) various combinations and sub-combinations of the following ad parameters, namely: the maximum ad display time, or face time, for any given display of that particular ad; the maximum total/cumulative ad display time, or face time, for that particular ad; the maximum number of times to display that particular ad per day; the date/time before which that particular ad should not run; and the date/time after which that particular ad should not run. The present invention also encompasses a system and method for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. It will be appreciated that the present invention also encompasses a software product, e.g., e-mail software, which incorporates an automatic advertisement download function which fetches a PlayList from a remote server system (e.g., a PlayList server system) which specifies the advertisements to be fetched by the client device on which the software product is installed and the source addresses (e.g., URNs) of the ad servers on which the specified advertisements are stored, fetches the advertisements specified in the fetched PlayList, and stores the fetched advertisements on the client device. The present invention further encompasses a system and method for distributing advertisements to client devices which have this software product installed thereon, including a PlayList server (or PlayList server system) which, in response to a PlayList Request from a particular client device that includes a client PlayList identifier, compares a client PlayList identified by the client PlayList identifier with a current PlayList (which may optionally be customized to that particular client device) stored on the PlayList server, and then sends back to the client device a New PlayList which specifies the new advertisements to be fetched by the client device, and the source addresses of the ad servers on which the specified new advertisements are stored. Optionally, the above-described automatic advertisement download function of the software product installed on the client device can delete (discard) all or PlayList server-specified ones of the advertisements which are currently stored on the client device, e.g., those which are not specified in the current PlayList; and/or the above-described automatic advertisement download function of the software product installed on the client device can merge the New PlayList with the current client PlayList. The present invention also encompasses several variations and details of implementation of this novel PlayList/ad fetch process utilized in the Eudora Adware scheme. Moreover, the present invention encompasses a software product, e.g., e-mail software, which incorporates a custom installer which identifies the specific software product distributor that distributed that software product.
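As an illustration of the per-ad parameters just enumerated, the following hypothetical Java container pairs them with a local display check; field names and types are assumptions.

import java.time.Instant;

// Hypothetical container for the per-ad parameters listed above; the
// field names are illustrative, not the actual downloaded file format.
public class AdParameters {
    long maxFaceTimePerShowingSecs;  // max display time for any one showing
    long maxTotalFaceTimeSecs;       // max cumulative display time for the ad
    int maxShowingsPerDay;           // max number of displays per day
    Instant notBefore;               // date/time before which the ad must not run
    Instant notAfter;                // date/time after which the ad must not run

    // Local control function: may this ad be shown right now?
    boolean mayDisplay(Instant now, long cumulativeFaceTimeSecs, int showingsToday) {
        return !now.isBefore(notBefore)
            && !now.isAfter(notAfter)
            && cumulativeFaceTimeSecs < maxTotalFaceTimeSecs
            && showingsToday < maxShowingsPerDay;
    }
}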
The present invention further encompasses a software product, e.g., e-mail software, which incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a custom installer which identifies the specific software product distributor which distributed that software product, for the purpose of facilitating apportionment of advertising revenue the software product vendor receives from advertisers to specific software product distributors. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices which have this software product installed thereon, wherein the system includes a centralized control facility which receives software product distributor ID information from the client devices and uses this software product distributor ID information to facilitate apportionment of advertising revenue the software product vendor receives from advertisers to specific software product distributors. Alternatively, or additionally, a central database function which identifies (e.g., by means of cross-referencing and/or correlation tables) the software product distributor ID for each software product distributed by the software vendor, e.g., based on a serial number or reference code associated with each copy of the software product, can be utilized. Furthermore, the present invention encompasses a software product, e.g., e-mail software, that incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a control function which utilizes a built-in "deadman timer" to impose a time limit for each particular advertisement download session, e.g., the client device will be disconnected from the remote server system upon expiration of the time limit imposed by the "deadman timer". The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. It will also be appreciated that the present invention can be characterized as a software product, e.g., e-mail software, that incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and an instrumentation and auditing module having various novel features/functions, e.g., maintaining a rotating log of ad-related statistics and/or performing random and/or statistically-based ad effectiveness audits with user permission. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon, wherein the system includes a centralized control facility for obtaining ad-related statistical information from selected client devices, in a random or statistical manner, e.g., for the purpose of monitoring the integrity and/or effectiveness of the advertisement distribution system.
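A minimal sketch of the "deadman timer" described above follows, assuming the download runs over a plain socket; the use of java.util.Timer and the caller-chosen limit are implementation assumptions.

import java.net.Socket;
import java.util.Timer;
import java.util.TimerTask;

// Sketch of the "deadman timer": the download socket is forcibly closed
// when the session time limit expires.
public class DeadmanTimer {
    static Timer arm(Socket downloadSocket, long limitMillis) {
        Timer timer = new Timer(true); // daemon thread
        timer.schedule(new TimerTask() {
            @Override public void run() {
                try {
                    downloadSocket.close(); // disconnect on expiration
                } catch (Exception ignored) { }
            }
        }, limitMillis);
        return timer; // caller cancels it if the download finishes in time
    }
}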
Moreover, the present invention encompasses a software product, e.g., e-mail software, that incorporates an automatic advertisement download function for automatically downloading advertisements from a remote server system to a client device on which the software product is installed, and a "link history" function which enables the user to review previously-viewed advertisements, e.g., by providing a graphical user interface (GUI) which includes a link history window that lists links the user has previously visited and ads that have been previously displayed to the user, along with some status information on each. Preferably, a mechanism will be provided to enable the user to select an ad listed in the link history window for display, e.g., by single-clicking the appropriate ad link, and to enable the user to visit the source Web site of any given ad listed in the link history window, e.g., by double-clicking the appropriate ad link. The present invention also encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. Furthermore, the present invention encompasses a software product, e.g., e-mail software, which incorporates a "Nag" function that monitors "time since last Nag" and that "nags" the user according to a "Nag Schedule" which is dynamically varied based on the monitored "time since last Nag" information. Finally, the present invention encompasses a software product, e.g., e-mail software, that incorporates a download function that downloads separate file portions representing a single image during separate communication sessions with a remote server (e.g., separate file portions of an advertisement file, e.g., a GIF file). The present invention further encompasses a system (and method) for automatically distributing advertisements to a multiplicity of client devices that have this software product installed thereon. Although presently preferred embodiments of the present invention have been described in detail hereinabove, it should be clearly understood that many variations and/or modifications of the basic inventive concepts herein taught, which may appear to those skilled in the pertinent art, will still fall within the spirit and scope of the present invention, as defined in the appended claims. What is claimed is: |
A method and apparatus for receiving one or more keys in a wireless communication system are described. The method comprises scheduling at a core network (105) a connection for transmitting non-key data to a client device (122, 116), determining in the client device (122, 116) whether to request said one or more keys, requesting delivery of said one or more keys from the core network (105), and receiving in the client device (122, 116) said one or more keys from the core network (105) during the previously scheduled connection for transmitting non-key data. The corresponding method and apparatus of the transmitter side are also described. |
1. A method of receiving one or more keys in a wireless communication system, the method comprises:
scheduling at a core network (105) a connection for transmitting non-key data to a client device (122, 116);
determining in the client device (122, 116) whether to request said one or more keys;
requesting delivery of said one or more keys from the core network (105); and
receiving in the client device (122, 116) said one or more keys from the core network (105) during the previously scheduled connection for transmitting non-key data.
2. The method of claim 1, wherein determining comprises receiving a notification based on a first criteria.
3. The method of claim 2, wherein said first criteria comprises evaluating if at least one or more previously received keys will be expiring.
4. The method of claim 2, wherein said first criteria comprises determining if said one or more keys were not received.
5. The method of claim 1, wherein receiving in the client device (122, 116) said non-key data comprises receiving a set of system parameters and/or a set of multi-media data.
6. A method of delivering one or more keys in a wireless communication system, the method comprises:
scheduling at a core network (105) a connection for transmitting non-key data to a client device (122, 116);
receiving a request to transmit said one or more keys from the client device (122, 116);
generating said one or more keys to be sent to the client device (122, 116); and
transmitting said one or more keys to the client device (122, 116) during the previously scheduled connection for transmitting non-key data.
7. The method of claim 6, further comprises sending a notification indicating that one or more keys are available for transmission.
8. The method of claim 6, wherein generating said one or more keys comprises accessing a license key server (110) and/or a digital rights management server (106) and/or a distribution center (111).
9. The method of claim 6, further comprises determining a schedule for said connection.
10. The method of claim 6, further comprises determining if said one or more keys have expired before generating said one or more keys.
11. The method of claim 6, further comprises attaching said one or more keys to a non-key data message containing a set of system parameters and/or a set of multi-media data.
12. The method of claim 6, further comprises using one or more key epochs (200) to schedule said connection for transmitting non-key data.
13. A machine-readable storage medium comprising instructions which, when executed by a machine, cause the machine to perform a method according to any of claims 1 to 5 or claims 6 to 12.
14. An apparatus, operable in a wireless communication system that has a scheduled connection for transmitting non-key data to the apparatus and wherein the apparatus is configured to receive one or more keys, the apparatus comprising:
means for determining if said one or more keys need to be requested and requesting delivery of said one or more keys; and
means for receiving said one or more keys during the scheduled connection for transmitting non-key data.
15. An apparatus for delivering one or more keys, operable in a wireless communication system, the apparatus comprising:
means for scheduling a connection for transmitting non-key data to a client device (122, 116);
means for receiving a request from said client device (122, 116) to transmit said one or more keys;
means for generating said one or more keys to be sent to a client device (122, 116); and
means for transmitting said generated one or more keys to said client device (122, 116) during the previously scheduled connection for transmitting non-key data. |
Claim of Priority under 35 U.S.C. §119 The present Application for Patent claims priority to U.S. Provisional Application No. 60/588,203, entitled "A Method And Apparatus For Delivering Keys," filed on July 14, 2004, and assigned to the assignee hereof and expressly incorporated by reference herein. FIELD OF THE INVENTION This invention relates to the field of content delivery systems, and in particular to the delivery and reception of keys between a server and one or more client terminals. BACKGROUND OF THE INVENTION In a content delivery system, operating in a wireless communication system that utilizes a portable client and a fixed server for delivery of audio and video content, protection and management of digital rights are a concern. Various methods are employed to efficiently deliver content such that the digital rights of the content owner are protected. One conventional way to manage and protect the digital rights is to encrypt the content at the server using one or more keys (encryption or decryption) and to provide the keys to an authorized user, using a client device, who subscribes to the content. Thus, the users with keys may decrypt the content to view and/or listen to subscribed content. Generally, content is encrypted using keys (one or more bits) and the keys are delivered on a regular basis. The keys that decrypt the content are delivered periodically. In order to provide the content, and the keys that decrypt the content, at client devices, various over-the-air techniques are used. Often, a key-exchange method is employed to provide a set of keys (for example public keys and private keys) between the server and the client. A common method is for the client to request a new key(s) or for the server to send a new key(s) to the client. According to this method, a new connection is made between client and server in order to transmit the keys to the client. Depending on the time of day, there could be several client devices on the system, each requesting a new set of keys. Responsive to the key requests, a new connection must be opened to exchange the keys. Most systems today require that new keys be used each time new content is provided to the user, since using new keys provides greater security. Also, each time one or more keys expire, or each time a subscription is renewed, a new connection is created to update old or expired keys. This is a burden on the system, considering that there are thousands of devices that may request access to new content or renew a subscription. Opening and closing a connection ties up resources for the server, especially during peak hours. It would be useful to exchange keys, and get the necessary keys to the client, without having to create a new connection. Therefore, an efficient delivery system is needed to deliver keys without placing an extra burden on the content delivery system. SUMMARY OF THE INVENTION An embodiment of the invention provides an apparatus, and an associated method, for an electronic device that manages delivery of one or more keys to another electronic device without creating unnecessary connections between the server and the mobile terminal. The exemplary electronic device, such as a server, may be operated in a communication system (for example CDMA, TDMA, GSM, OFDM, OFDMA, etc.). The electronic device comprises a method for receiving a request to transmit keys from another electronic device, for example a mobile terminal.
Responsive to the request, the server generates the required keys and determines a best time to send the keys to the mobile client device such that a new connection is not required just for sending the requested keys. The best time to send the keys may be during a scheduled connection for sending non-key data, for example a connection set up for transmitting system parameters, content data, etc. By sending the keys along with the non-key data, the need for a special connection is avoided. The embodiment also encompasses an electronic device, such as a mobile terminal, which may be operated in a communication system (for example CDMA, TDMA, GSM, OFDM, OFDMA, etc.). The mobile terminal comprises a method for requesting one or more keys from a server and receiving the requested keys during a connection set up by the server for transmitting non-key data. By receiving the keys along with non-key data, the need for a special connection is avoided. The embodiment also encompasses an electronic device, such as a mobile device, a server computer, a portable computer, etc., operable in a communication system. The server computer comprises a method for receiving a request for one or more keys from an external electronic device and transmitting the requested keys during a connection set up for transmitting non-key data. By transmitting the keys along with non-key data, the need for a special connection is avoided. A more complete appreciation of all the embodiments and scope can be obtained from the accompanying drawings, the following detailed description of the invention, and the appended claims. BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 illustrates a block diagram of electronic devices operating in a wireless content delivery system according to an embodiment of the invention; Figure 2A illustrates an example of operation of a content delivery system using key epochs; Figure 2B illustrates an example of operation of a content delivery system during a key epoch; Figure 3 illustrates an example of a flowchart of tasks performed by the processor of the server device according to an embodiment of the invention; and Figure 4 illustrates an example of a flowchart of tasks performed by the processor of the mobile client device according to an embodiment of the invention. DETAILED DESCRIPTION OF THE INVENTION Figure 1 illustrates a communication system and a block diagram of electronic devices operating within the system, such as a server device 100, a mobile client terminal 122 ("Client") and a content provider 120. The server device 100 comprises a controller (which may also be known as processor 102 and may control the operations of the server device 100) coupled to memory 103 and various servers (collectively known as core network 105). Each of the various servers may comprise a processor, receiver, transmitter and various memories and may also be coupled to the processor 102, memory 103, transmitter 104 and receiver 107. Generally, the server device 100 receives content from the content provider 120 and the server device communicates with one or more mobile client terminals 122. The server device 100 may be a stand-alone computer or a plurality of computers connected to each other to form the server device 100.
The server device 100 may also be a mobile terminal wirelessly connected to the one or more computers and one or more mobile client terminals 122. The core network 105 includes a distribution center 111 comprising a content server 112, which receives content for the system directly from the content provider 120, a distribution server 114, which distributes the content to one or more mobile client devices, and a program content database 118, which stores content received from the content provider 120. The core network may comprise a digital rights management (DRM) server 106 that generates one or more keys (for example general keys for decryption, program keys, session keys, encrypted program keys, service keys and service licenses) and manages the secure storage of those keys. A subscription server 108 performs subscription management for the system and communicates with the mobile client device. A License Key server (LKS) 110 services key requests from the subscribed mobile client terminals 122, and an Overhead Notification server (ONS) 116 is responsible for collecting notifications of state changes which are timely transmitted to the mobile client terminal 122. The transmitter 104 and the receiver 107 are coupled to the processor 102. It should be noted that the transmitter 104 and the receiver 107 may be connected wirelessly or by hard wire to external devices for transmitting and receiving information, respectively. The example mobile client terminal 122 comprises a controller (which may also be referred to as processor 126) coupled to memory 127. The mobile client terminal 122 communicates with the server device 100 to receive content and keys to decrypt the content. FIGS. 2A and 2B illustrate an example of operation of a content delivery system using key epochs 200 and the various messages that are sent between the server device 100 and the mobile device 122. An epoch is a unique period of time wherein a specific key is valid. Generally, a time period, for example a 24-hour period, is divided into one or more key epochs. The length of each key epoch may be predetermined by the system provider or dynamically configurable. FIG. 2A shows an example of key epoch T1 202 and key epoch T2 204, and an example of operation of the content delivery system during the epochs. In a typical content delivery system, the server device 100 is not always connected to the mobile client terminal 122. Depending on the service provider, the server device 100 only periodically communicates with one or more mobile client terminals to reduce congestion in the system. Service providers generally do not have enough bandwidth to communicate with all the mobile client terminals on an "Always-On" basis. Therefore, most service providers schedule a time to open a connection, for example a connection window for sending and receiving data between the mobile client terminal 122 and the server device 100. As shown in FIG. 2A, key epoch T1 202 further comprises a delta portion 206, which is T2 - X, wherein X represents a time change (delta) before the start of the T2 epoch 204. Preferably, during this time, the system attempts to create a connection between the server device 100 and one or more of the mobile client terminals 122 in the system.
The connection (also known as a communication link) is made during one or more scheduled windows, for example 208, 210 and 212. The communication link may be made using various known wireless communication techniques, for example a CDMA 1X-EVDO system or an Orthogonal Frequency Division Multiple Access (OFDMA) system, that allow transmission and reception of multi-media data. The operators of the system may decide the value of X, which will affect the size of the delta 206 and the number of windows that can be created during this time. Also, depending on the operator and the type of information that needs to be communicated, a window of time may be used to communicate with one mobile client terminal 122 or several mobile client terminals at the same time. Therefore, several hundred windows may be used to ensure that necessary or requested information is sent to the mobile client terminals. As shown in FIG. 2B, prior to the start of a scheduled window, for example 208, an epoch event 214 is created. Thereafter, a communication link between the server device 100 and the mobile client terminal 122 is created. Information for the start of the window period and for setting up the communication link is sent to the mobile client terminal 122. Message 216 is used to communicate information, such as the system parameters that set up the communication link. The server device 100 schedules a time when message 216 is sent to the mobile client device 122. Typically, message 216 contains only non-key data; however, it is contemplated that message 216 may also contain key data. Non-key data is any data other than encryption or decryption keys (for example a portion of a decryption key, program key, session key, etc.). For example, non-key data may be data representing content, a program application, or executable code. The non-key data may also comprise a set of system parameters comprising data rate information of the communication link, server device states, status of keys and program content. Responsive to receiving message 216, the mobile client terminal 122 may send a message 218 to request new content and/or keys for future use. If the mobile client terminal 122 has requested new keys, then the server device 100 will generate new keys by communicating with the digital rights management (DRM) server using messages 220 and 222. Thereafter, message 224 is generated and transmitted to the mobile client device 122. Generally, message 224 is scheduled for transmitting non-key data only, unless the requested keys are ready to be transmitted during this window. According to an embodiment, the server device 100 may take advantage of the transmission of the messages 216 or 224 to send one or more keys. Generally, messages 216 and 224 are not intended to send any keys, only system parameters. If the server device 100 has any keys to transmit to the mobile client terminal 122, then the server device 100 may attach the keys during these transmissions to avoid having to create a new connection for transmitting the keys. Also, if it is determined through message 218 that the mobile client terminal 122 needs new keys, then the server device 100 may take advantage of sending keys along with the non-key data message 224 to transmit keys to the mobile client terminal 122, without creating a special connection.
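The piggybacking of keys onto messages 216 and 224 might be modeled as follows. This is a sketch only: the ScheduledMessage class, its fields, and the byte-array key representation are assumptions rather than the patent's wire format.

import java.util.ArrayList;
import java.util.List;

// Illustrative model of the exchange around messages 216/218/224: keys
// are attached to an already scheduled non-key message instead of
// opening a dedicated connection.
public class ScheduledMessage {
    final byte[] systemParameters;        // the non-key payload (message 216/224)
    final List<byte[]> attachedKeys = new ArrayList<>();

    ScheduledMessage(byte[] systemParameters) {
        this.systemParameters = systemParameters;
    }

    // Called just before the scheduled window opens: if any keys are
    // pending for this client, attach them to the outgoing message.
    void attachPendingKeys(List<byte[]> pendingKeys) {
        attachedKeys.addAll(pendingKeys);
        pendingKeys.clear(); // nothing left needing a special connection
    }
}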
Figure 3 illustrates an example of a flowchart 300 of the tasks of delivering keys performed by the processor 102 of the server device 100 according to an embodiment of the invention. At block 302, data is continually being prepared for transmission. At block 304, the processor processes a scheduled event to send a message to the external device. At block 306, the processor creates a non-key data message to be sent to one or more mobile client terminals; typically this message only contains system parameters to create a connection. At block 308, the processor determines if there are any keys that need to be sent to at least one or more mobile client terminals in the system. If yes, then at block 310, the processor attaches the keys to this message, a scheduled non-key data message, and transmits the message. Otherwise, at block 312, the processor transmits the non-key data message without any keys. Responsive to the transmitted message of block 310 or 312, at block 314 the processor receives a response from the mobile client terminal 122. Generally, this response establishes a connection between the server device 100 and the mobile client terminal 122. At block 316, the processor determines if the mobile client terminal received the keys sent at block 310, or determines if the mobile client terminal is requesting new keys. At block 318, the processor processes the response received from the mobile client terminal 122 to determine if the transmitted keys were received by the mobile client terminal 122. If yes, then at block 320, the processor determines if the mobile client terminal 122 has requested new keys. If yes, then at block 322, the processor generates the necessary keys and, at block 324, attaches the keys to the next scheduled message communicated to the mobile client terminal 122. Otherwise, the processor returns to block 302. If it is determined at block 318 that the keys were not received, then at block 324, the processor attaches the keys to the next scheduled message communicated to the mobile client terminal 122. Figure 4 illustrates an example of a flowchart 400 of tasks performed by the mobile client terminal's 122 processor 126 for receiving keys according to an embodiment of the invention. At block 402, the mobile client terminal 122 is in an idle state of the content delivery system, wherein the server device 100 and the mobile client terminal 122 may have a communication link, or may have a communication link without a data connection. The mobile client terminal 122 is waiting for communication from the server device 100. At block 404, the processor processes a received message, generally from the server device 100. This message may or may not contain any keys. At block 406, the processor determines if the received message contained any keys. If yes, then at block 408, the processor 126 extracts one or more keys, processes the extracted keys and then stores the keys in a database. Thus, at block 408, the processor receives one or more keys during a connection scheduled for receiving non-key data. At block 410, the processor 126 determines if the mobile client terminal 122 will need new keys. If so, then at block 412, the processor 126 notifies the server device 100 or requests the new keys. Thereafter, the processor waits at block 402 for another message from the server device 100 from which to extract the keys. As examples, the method and apparatus may also be implemented in mobile electronic devices such as a mobile telephone, a PDA, mobile computers, a mobile server computer and other devices having a wireless connection system and receiving audio and video data.
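Blocks 306 through 312 of flowchart 300 reduce to a short decision sequence; the following sketch, reusing the hypothetical ScheduledMessage class above, shows it, with transmit() left as a placeholder for the radio send.

import java.util.List;

// A compact sketch of server-side blocks 306-312 of flowchart 300: build
// the scheduled non-key message, attach keys only if some are pending,
// then transmit.
public class KeyDeliveryServer {
    void onScheduledEvent(List<byte[]> pendingKeys, byte[] systemParameters) {
        ScheduledMessage msg = new ScheduledMessage(systemParameters); // block 306
        if (!pendingKeys.isEmpty()) {                                  // block 308
            msg.attachPendingKeys(pendingKeys);                        // block 310
        }
        transmit(msg);                                                 // block 310/312
    }

    void transmit(ScheduledMessage msg) {
        // Placeholder for the radio/network send; out of scope here.
    }
}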
As examples, the method and apparatus may also be implemented in mobile electronic devices such as a mobile telephone, a PDA, a mobile computer, a mobile server computer and other devices having a wireless connection system and receiving audio and video data. Also, as an example, the content may be a block of video and audio data such as a full television program or a segment of one or more programs. While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention as defined by the appended claims. That is, other modifications and variations to the invention will be apparent to those skilled in the art from the foregoing disclosure and teachings. Thus, while only certain embodiments of the invention have been specifically described herein, it will be apparent that numerous modifications may be made thereto without departing from the scope of the invention. In the following, further embodiments are described to facilitate the understanding of the invention. In a first further embodiment, a method of receiving one or more keys is described, the method comprising acts of determining whether to request said one or more keys, requesting delivery of said one or more keys and receiving said one or more keys during a connection scheduled for receiving non-key data. Further, said act of receiving said one or more keys may comprise an act of receiving one or more decryption keys used for decrypting previously stored content. Also, said act of determining may comprise an act of receiving a notification based on a first criteria, wherein said first criteria may comprise an act of evaluating if at least one or more previously received keys will be expiring. Also, said first criteria may comprise determining if said one or more keys were not received. Further, receiving said non-key data may comprise receiving a set of system parameters. Also, receiving said non-key data may comprise receiving a set of multi-media data. Receiving said non-key data may also comprise receiving a set of system parameters and a set of multi-media data. The method may also further comprise an act of storing said received one or more keys into a database. The act of receiving may also comprise receiving said one or more keys from a core network, and said core network may comprise a digital rights management server and a license key server. In another further embodiment, an apparatus is described, which is operable in a mobile communication system and configured to receive one or more keys, the apparatus comprising a processor configured to determine if said one or more keys need to be requested and to request delivery of said one or more keys, said processor further configured to receive said one or more keys during a connection scheduled for receiving non-key data. Said one or more keys may comprise one key for decrypting previously stored content. Also, said processor may further be configured to receive a notification based on a first criteria, wherein said first criteria may comprise a value of key expiration. Also, said first criteria may comprise a value generated after determining if one or more keys were not received. Said non-key data may also comprise a set of multi-media data. Said non-key data may also comprise a set of system parameters. Further, said non-key data may comprise a set of system parameters and a set of multi-media data. Also, said processor may further store said received one or more keys into a database.
Said processor may also be configured to receive said one or more keys, wherein at least one of said keys may comprise a license key. In yet another further embodiment, an apparatus for delivering one or more keys is described, which is operable in a wireless communication system, the apparatus comprising a processor configured to generate said one or more keys to be sent to a mobile device and to transmit said generated keys to said mobile device during a connection scheduled for transmitting non-key data. Said processor may further be configured to determine, prior to transmitting, if said mobile device requires at least one key. Also, said processor may further be configured to send a notification to said mobile device indicating that one or more keys are available for transmission. Said processor may also be configured to access a license key server to generate said keys. Said processor may further be configured to access a license key server, a digital rights management server and a distribution center to generate said keys. Also, said processor may further be configured to receive a request to transmit keys from said mobile device, prior to generating said keys. Also, said processor may be configured to determine a schedule for said connection. Said processor may also be configured to determine if said one or more keys have expired before generating said keys. Said processor may further be configured to attach said keys to a non-key data message containing system parameters. Also, said processor may be configured to use key epochs to schedule said connection for transmitting non-key data. In yet another further embodiment, a method of delivering one or more keys in a wireless communication system is described, the method comprising acts of generating said keys to be sent to a mobile device and transmitting said keys to said mobile device during a connection scheduled for transmitting non-key data. Further, the method may comprise an act of determining, prior to transmitting, if it is required to transmit at least one key. Further, the method may comprise an act of sending a notification indicating that one or more keys are available for transmission. Further, said act of generating may comprise an act of accessing a license key server. Also, said act of generating may comprise an act of accessing a license key server, a digital rights management server and a distribution center. Also, the method may comprise an act of receiving a request to transmit keys, prior to generating said keys. Further, the method may comprise an act of determining a schedule for said connection. The method may also further comprise an act of determining if said one or more keys have expired before generating said keys. Also, the method may comprise an act of attaching said keys to a non-key data message containing system parameters. Also, the method may comprise an act of using key epochs to schedule said connection for transmitting non-key data. In yet another further embodiment, a method of delivering keys from a first device to a second device is described, the method comprising acts of generating said keys, at the first device, to be sent to the second device and transmitting said keys from the first device to the second device during a connection scheduled for transmitting non-key data.
The method may also comprise acts of determining, at the second device, if one or more keys need to be requested, requesting the first device to deliver said keys and receiving, at the second device, said keys from the first device during a connection scheduled for receiving non-key data. Further, said first device may comprise a server computer. Also, said first device may comprise a mobile server computer. Further, said second device may comprise a mobile terminal. In another further embodiment, a machine-readable medium is described, which comprises instructions which, when executed by a machine, cause the machine to perform operations including determining whether to request one or more keys, requesting delivery of said keys and receiving said keys during a connection scheduled for receiving non-key data. In a final further embodiment, a machine-readable medium is described, which comprises instructions which, when executed by a machine, cause the machine to perform operations including generating said keys to be sent to a mobile device and transmitting said keys to said mobile device during a connection scheduled for transmitting non-key data. |
According to an embodiment of the present disclosure, a method for manufacturing an integrated circuit (IC) device may include mounting an IC chip onto a center support structure of a leadframe, bonding the IC chip to at least some of the plurality of pins, encapsulating the leadframe and bonded IC chip, sawing a step cut into the encapsulated leadframe, plating the exposed portion of the plurality of pins, and cutting the IC package free from the bar. The leadframe may include a plurality of pins extending from the center support structure and a bar connecting the plurality of pins remote from the center support structure. The step cut may be sawn into the encapsulated leadframe along a set of cutting lines using a first saw width without separating the bonded IC package from the bar, thereby exposing at least a portion of the plurality of pins. The IC package may be cut free from the bar by sawing through the encapsulated lead frame at the set of cutting lines using a second saw width less than the first saw width. |
CLAIMS1. A method for manufacturing an integrated circuit (IC) device in a flat no-leads package, the method comprising:mounting an IC chip onto a center support structure of a leadframe, the leadframe including:a plurality of pins extending from the center support structure; and a bar connecting the plurality of pins remote from the center support structure; bonding the IC chip to at least some of the plurality of pins;encapsulating the leadframe and bonded IC chip;sawing a step cut into the encapsulated leadframe along a set of cutting lines using a first saw width without separating the bonded IC package from the bar, thereby exposing at least a portion of the plurality of pins;plating the exposed portion of the plurality of pins; andcutting the IC package free from the bar by sawing through the encapsulated lead frame at the set of cutting lines using a second saw width less than the first saw width.2. A method according to Claim 1, further comprising:performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the lead frame; andperforming a circuit test of the isolated individual pins after the isolation cut.3. A method according to Claims 1 or 2, further comprising:performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the lead frame, wherein the isolation cut is performed with a third saw width less than the first saw width; andperforming a circuit test of the isolated individual pins after the isolation cut.4. A method according to Claim 3, further comprising bonding the IC chip to at least some of the plurality of pins using wire bonding.5. A method according to any one of the preceding Claims, wherein the first saw width is approximately 0.40 mm.6. A method according to any one of the preceding Claims, wherein the second saw width is approximately 0.30 mm.7. A method according to any one of the preceding Claims, wherein the third saw width is approximately between 0.24 mm and 0.30 mm.8. A method according to any one of the preceding Claims, wherein the step cut is approximately 0.1 mm to 0.15 mm deep and the leadframe has a thickness of approximately 0.20 mm.9. A method for installing an integrated circuit (IC) device in a flat no-leads package onto a printed circuit board (PCB), the method comprising:mounting an IC chip onto a center support structure of a leadframe, the leadframe including:a plurality of pins extending from the center support structure; and a bar connecting the plurality of pins remote from the center support structure; bonding the IC chip to at least some of the plurality of pins;encapsulating the leadframe and bonded IC chip;sawing a step cut into the encapsulated leadframe along a set of cutting lines using a first saw width without separating the bonded IC package from the bar, thereby exposing at least a portion of the plurality of pins;plating the exposed portion of the plurality of pins;cutting the IC package free from the bar by sawing through the encapsulated lead frame at the set of cutting lines using a second saw width less than the first saw width; andattaching the flat no-leads IC package to the PCB using a reflow soldering method to join the plurality of pins of the IC package to respective contact points on the PCB.10. 
A method according to Claim 9, further comprising:performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the bar; andperforming a circuit test of the isolated individual pins after the isolation cut.11. A method according to Claims 9 or 10, further comprising:performing an isolation cut to isolate individual pins of the IC package without separating the IC package from the bar, wherein the isolation cut is performed with a third saw width less than the first saw width; andperforming a circuit test of the isolated individual pins after the isolation cut.12. A method according to Claim 11, further comprising bonding the IC chip to at least some of the plurality of pins using wire bonding.13. A method according to any one of the preceding Claims 9-12, wherein the first saw width is approximately 0.40 mm.14. A method according to any one of the preceding Claims 9-13, wherein the second saw width is approximately 0.30 mm.15. A method according to any one of the preceding Claims 9-14, wherein the third saw width is approximately between 0.24 mm and 0.30 mm.16. A method according to any one of the preceding Claims 9-15, wherein the step cut is approximately 0.1 mm to 0.15 mm deep and the leadframe has a thickness of approximately 0.20 mm.17. A method according to any one of the preceding Claims 9-16, wherein the reflow soldering process provides fillet heights of approximately 60% of the exposed surface of the pins.18. An integrated circuit (IC) device in a flat no-leads package comprising:an IC chip mounted onto a center support structure of a leadframe and encapsulated with the leadframe to form an IC package having a bottom face and four sides;a set of pins with faces exposed along a lower edge of the four sides of the IC package; anda step cut into the IC package along a perimeter of the bottom face of the IC package, including the exposed faces of the set of pins;wherein a bottom facing exposed portion of the plurality of pins including the step cut is plated.19. An IC device according to Claim 18, wherein the step cut is approximately 0.10 mm to 0.15 mm deep.20. An IC device according to Claims 18 or 19, wherein the plurality of pins are attached to a printed circuit board with fillet heights of approximately 60%. |
QFN PACKAGE WITH IMPROVED CONTACT PINS
RELATED PATENT APPLICATION
This application claims priority to commonly owned U.S. Provisional Patent Application No. 62/082,338, filed November 20, 2014, which is hereby incorporated by reference herein for all purposes.
TECHNICAL FIELD
The present disclosure relates to integrated circuit packaging, in particular to so-called flat no-leads packaging for integrated circuits.
BACKGROUND
Flat no-leads packaging refers to a type of integrated circuit (IC) packaging with integrated pins for surface mounting to a printed circuit board (PCB). Flat no-leads packages may sometimes be called micro leadframes (MLF). Flat no-leads packages, including for example quad-flat no-leads (QFN) and dual-flat no-leads (DFN), provide physical and electrical connection between an encapsulated IC component and an external circuit (e.g., a printed circuit board (PCB)). In general, the contact pins for a flat no-leads package do not extend beyond the edges of the package. The pins are usually formed by a single leadframe that includes a central support structure for the die of the IC. The leadframe and IC are encapsulated in a housing, typically made of plastic. Each leadframe may be part of a matrix of leadframes that has been molded to encapsulate several individual IC devices. Usually, the matrix is sawed apart to separate the individual IC devices by cutting through any joining members of the leadframe. The sawing or cutting process also exposes the contact pins along the edges of the packages. Once sawn, the bare contact pins may provide bad or no connection for reflow soldering. The exposed faces of the contact pins may not provide sufficient wettable flanks for a reliable connection. Reflow soldering is a preferred method for attaching surface mount components to a PCB, intended to melt the solder and heat the adjoining surfaces without overheating the electrical components, thereby reducing the risk of damage to the components.
SUMMARY
Hence, a process or method that improves the wettable surface of flat no-leads contact pins for a reflow soldering process to mount the flat no-leads package to an external circuit may provide improved electrical and mechanical performance of an IC in a QFN or other flat no-leads package. According to an embodiment of the present disclosure, a method for manufacturing an integrated circuit (IC) device may include mounting an IC chip onto a center support structure of a leadframe, bonding the IC chip to at least some of the plurality of pins, encapsulating the leadframe and bonded IC chip, sawing a step cut into the encapsulated leadframe, plating the exposed portion of the plurality of pins, and cutting the IC package free from the bar. The leadframe may include a plurality of pins extending from the center support structure and a bar connecting the plurality of pins remote from the center support structure. The step cut may be sawn into the encapsulated leadframe along a set of cutting lines using a first saw width without separating the bonded IC package from the bar, thereby exposing at least a portion of the plurality of pins.
The IC package may be cut free from the bar by sawing through the encapsulated lead frame at the set of cutting lines using a second saw width less than the first saw width. According to a further embodiment, a method for installing an integrated circuit (IC) device on a printed circuit board (PCB) may include mounting an IC chip onto a center support structure of a leadframe, bonding the IC chip to at least some of the plurality of pins, encapsulating the leadframe and bonded IC chip, sawing a step cut into the encapsulated leadframe, plating the exposed portion of the plurality of pins, cutting the IC package free from the bar, and attaching the flat no-leads IC package to the PCB. The leadframe may include a plurality of pins extending from the center support structure and a bar connecting the plurality of pins remote from the center support structure. The step cut may be sawn along a set of cutting lines using a first saw width without separating the bonded IC package from the bar, thereby exposing at least a portion of the plurality of pins. The IC package may be cut free from the bar by sawing through the encapsulated lead frame at the set of cutting lines using a second saw width less than the first saw width. The IC package may be attached to the PCB using a reflow soldering method to join the plurality of pins of the IC package to respective contact points on the PCB. According to a further embodiment, an integrated circuit (IC) device in a flat no-leads package may include an IC chip mounted onto a center support structure of a leadframe and encapsulated with the leadframe to form an IC package having a bottom face and four sides. The IC device may include a set of pins with faces exposed along a lower edge of the four sides of the IC package. The IC device may include a step cut into the IC package along a perimeter of the bottom face of the IC package, including the exposed faces of the set of pins. A bottom facing exposed portion of the plurality of pins including the step cut may be plated.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a schematic showing a cross section side view through an embodiment of a flat no-leads package mounted on a printed circuit board (PCB) according to the teachings of the present disclosure.
Figure 2A is a picture showing part of a typical QFN package in a side view and bottom view. Figure 2B shows an enlarged view of the face of copper contact pins along the edge of a QFN package exposed by sawing through an encapsulated leadframe.
Figure 3 is a picture showing a typical QFN package after a reflow soldering process failed to provide sufficient mechanical and electrical connections to a PCB.
Figures 4A and 4B are pictures showing a partial view of a packaged IC device incorporating teachings of the present disclosure in a flat no-leads package with high wettable flanks for use in reflow soldering.
Figure 5A is a picture of the packaged IC device of Figure 4 after a reflow soldering process provided an improved solder connection; Figure 5B is a drawing showing an enlarged detail of the improved solder connection.
Figure 6 is a drawing showing a top view of a leadframe which may be used to practice the teachings of the present disclosure.
Figure 7 is a flowchart illustrating an example method for manufacturing an integrated circuit (IC) device in a flat no-leads package incorporating teachings of the present disclosure.
Figures 8A-8C are schematic drawings illustrating part of an example method for manufacturing an integrated circuit (IC) device in a flat no-leads package incorporating teachings of the present disclosure.
Figures 8D and 8E are pictures of an IC device package after the process step of Figs. 8A-8C has been completed.
Figure 9A is a schematic drawing illustrating part of an example method for manufacturing an integrated circuit (IC) device in a flat no-leads package incorporating teachings of the present disclosure.
Figures 9B and 9C are pictures of an IC device package after the process step of Fig. 9A has been completed.
Figures 10A and 10B are schematic drawings illustrating part of an example method for manufacturing an integrated circuit (IC) device in a flat no-leads package incorporating teachings of the present disclosure.
Figure 10C is a picture of an IC device package after the process step of Figs. 10A and 10B has been completed.
Figures 11A and 11B are schematic drawings illustrating part of an example method for manufacturing an integrated circuit (IC) device in a flat no-leads package incorporating teachings of the present disclosure.
Figure 11C is a picture of an IC device package after the process step of Figs. 11A and 11B has been completed.
DETAILED DESCRIPTION
Figure 1 is a side view showing a cross section view through a flat no-leads package 10 mounted on a printed circuit board (PCB) 12. Package 10 includes contact pins 14a, 14b, die 16, leadframe 18, and encapsulation 20. Die 16 may include any integrated circuit, whether referred to as an IC, a chip, and/or a microchip. Die 16 may include a set of electronic circuits disposed on a substrate of semiconductor material, such as silicon. As shown in Figure 1, contact pin 14a is the subject of a failed reflow process in which the solder 20a did not stay attached to the exposed face of contact pin 14a; the bare copper face of contact pin 14a created by sawing the package 10 free from a leadframe matrix (shown in more detail in Figure 6 and discussed below) may contribute to such failures. In contrast, contact pin 14b shows an improved soldered connection 20b created by a successful reflow procedure. This improved connection provides both electrical communication and mechanical support. The face of contact pin 14b may have been plated before the reflow procedure (e.g., with tin plating). Figure 2A is a picture showing part of a typical QFN package 10 in a side view and bottom view. Figure 2B shows an enlarged view of the face 24 of copper contact pins 14a along the edge of QFN package 10 exposed by sawing through the encapsulated leadframe 18. As shown in Figure 2A, the bottom 22 of contact pin 14a is plated (e.g., with tin plating) but the exposed face 24 is bare copper. Figure 3 is a picture of a typical QFN package 10 after a reflow soldering process failed to provide sufficient mechanical and electrical connections to a PCB 12. As shown in Figure 3, the bare copper face 24 of contact pins 14a may provide bad or no connection after reflow soldering.
The exposed face 24 of contact pins 14a may not provide sufficient wettable flanks for a reliable connection. Figures 4A and 4B are pictures showing a partial view of a packaged IC device 30 incorporating the teachings of the present disclosure, wherein both the exposed face portion 33 and the bottom surface 34 of the pins 32 have been plated with tin to produce an IC device 30 in a flat no-leads package with high wettable flanks for use in reflow soldering, providing an improved solder connection as shown at contact pin 14b in Figure 1 and demonstrated in the picture of Figure 5A. As shown, IC device 30 may comprise a quad-flat no-leads package. In other embodiments, IC device 30 may comprise a dual-flat no-leads package, or any other packaging (e.g., any micro leadframe (MLF)) in which the leads do not extend much beyond the edges of the packaging and which is configured to surface-mount the IC to a printed circuit board (PCB). Figure 5A is a picture showing packaged IC device 30 with plating on both the exposed face portion 33 of the pins 32 and the bottom surface 34 of pins 32, demonstrating the improved connection after a reflow soldering process connecting to a PCB 36. Figure 5B is a drawing showing an enlarged cross-sectional detail of IC device 30 after attachment to PCB 36 using a reflow soldering process. As visible in Figures 5A and 5B, solder 38 is connected to pins 32 along both the bottom surface 34 and the face portion 33. Figure 6 shows a leadframe 40 which may be used to practice the teachings of the present disclosure. As shown, leadframe 40 may include a center support structure 42, a plurality of pins 44 extending from the center support structure, and one or more bars 46 connecting the plurality of pins remote from the center support structure. Leadframe 40 may include a metal structure providing electrical communication through the pins 44 from an IC device (not shown in Figure 6) mounted to center support structure 42 as well as providing mechanical support for the IC device. In some applications, an IC device may be glued to center support structure 42. In some embodiments, the IC device may be referred to as a die. In some embodiments, pads or contact points on the die or IC device may be connected to respective pins by bonding (e.g., wire bonding, ball bonding, wedge bonding, compliant bonding, thermosonic bonding, or any other appropriate bonding technique). In some embodiments, leadframe 40 may be manufactured by etching or stamping. Leadframe 40 may be part of a matrix of leadframes 40a, 40b for use in batch processing. Figure 7 is a flowchart illustrating an example method 50 for manufacturing an integrated circuit (IC) device in a flat no-leads package incorporating teachings of the present disclosure. Method 50 may provide improved connection for mounting the IC device to a PCB. Step 52 may include backgrinding a semiconductor wafer on which an IC device has been produced. Typical semiconductor or IC manufacturing may use wafers approximately 750 μm thick. This thickness may provide stability against warping during high-temperature processing. In contrast, once the IC device is complete, a thickness of approximately 50 μm to 75 μm may be preferred. Backgrinding (also called backlap or wafer thinning) may remove material from the side of the wafer opposite the IC device. Step 54 may include sawing and/or cutting the wafer to separate the IC device from other components formed on the same wafer.
Step 56 may include mounting the IC die (or chip) on a center support structure of a leadframe. The IC die may be attached to the center support structure by gluing or any other appropriate method. At Step 58, the IC die may be connected to the individual pins extending from the center support structure of the leadframe. In some embodiments, pads and/or contact points on the die or IC device may be connected to respective pins by bonding (e.g., wire bonding, ball bonding, wedge bonding, compliant bonding, thermosonic bonding, or any other appropriate bonding technique). At Step 60, the IC device and leadframe may be encapsulated to form an assembly. In some embodiments, this includes molding into a plastic case. If a plastic molding is used, a post-molding cure step may follow to harden and/or set the housing. At Step 62, a step cut may be sawn into the encapsulated assembly. The step cut may be made along a set of cutting lines selected to cross at least a set of pins of the leadframe. The step cut may be made using a step cut saw width. In some embodiments, the step cut saw width may be approximately 0.4 mm. In some embodiments, the step cut may be made approximately 0.1-0.15 mm deep into a leadframe having a thickness of about 0.2 mm. The step cut does not, therefore, cut all the way through the pins. Figure 8 illustrates a process of one embodiment of a step cut that may be used at Step 62, with Figures 8A-8C including schematics showing a side view of Step 62. As shown in Figure 8A, pins 44 may be encapsulated in a plastic molding 48. Pins 44 and/or any other leads in leadframe 40 may have a thickness, t. As shown in Figure 8B, the step cut saw width, ws, and depth, d, do not fully separate pins 44 from neighboring packages. Figure 8C shows pins 44 exposed along the bottom surface 44a and step cut 44b. Figures 8D and 8E are isometric views showing pins 44 after Step 62 has been completed. Step 64 may include a chemical de-flashing and a plating process to cover the exposed bottom areas of the connection pins. Figure 9 illustrates the results of one embodiment of a plating process that may be used at Step 64. Figure 9A is a schematic side view in cross section showing pins 44 encapsulated in plastic molding 48, having a step cut as discussed in relation to Step 62. In addition, plating 45 has been deposited on the exposed surfaces of pins 44, including the bottom surfaces 44a and step cut 44b. Figures 9B and 9C are pictures showing plated pins 44. Step 66 may include performing an isolation cut. The isolation cut may include sawing through the pins of each package to electrically isolate the pins from one another. The isolation cut may be made using a saw width less than the saw width used to make the step cut. In some embodiments, the isolation cut may be made with a blade having a thickness of approximately 0.24 mm. Figure 10 illustrates a process of one embodiment of an isolation cut that may be used at Step 66. Figures 10A and 10B are schematic drawings showing a cross-sectional side view of pins 44 encapsulated in plastic molding 48 and after a step cut and plating of the exposed surfaces. After plating 45 has been deposited in Step 64, an isolation cut of width wi is made through the full thickness t of pins 44 as shown in Figure 10B. wi is narrower than ws, leaving at least a portion of the plated step cut remaining after the isolation cut.
In contrast to Step 62, the depth of the isolation cut is larger than the total thickness t of pins 44, so that the individual pins 44 and circuits of leadframe 40 will no longer be in electrical communication through the matrix of leadframes and/or bar 46. Figure 10C is a picture showing pins 44 after Step 66 is complete. Step 68 may include a test and marking of the IC device once the isolation cut has been completed. Method 50 may be changed by altering the order of the various steps, adding steps, and/or eliminating steps. For example, flat no-leads IC packages may be produced according to teachings of the present disclosure without performing an isolation cut and/or testing of the IC device. Persons having ordinary skill in the art will be able to develop alternative methods using these teachings without departing from the scope or intent of this disclosure. Step 70 may include a singulation cut to separate the IC device from the bar, the leadframe, and/or other nearby IC devices in embodiments where leadframe 40 is part of a matrix of leadframes 40. The singulation cut may include sawing through the same cutting lines as the step cut and/or the isolation cut with a saw width less than the step cut saw width. In some embodiments, the singulation saw width may be approximately 0.3 mm. The singulation cut exposes only a portion of the bare copper of the pins of the leadframe. Another portion of the pins remains plated and unaffected by the final sawing step. Figure 11 illustrates a process of one embodiment of a singulation cut that may be used at Step 70. Figures 11A and 11B are schematic drawings showing a cross-sectional side view of pins 44 encapsulated in plastic molding 48 and after a step cut, plating of the exposed surfaces, and an isolation cut. After any testing and/or marking in Step 68, a singulation cut of width wf is made through the full package as shown in Figure 11B. wf is narrower than ws, leaving at least a portion of the plated step cut remaining after the singulation cut. Figure 11C is a picture showing pins 44 after Step 70 is complete. Step 72 may include attaching the separated IC device, in its package, to a PCB or other mounting device. In some embodiments, the IC device may be attached to a PCB using a reflow soldering process. Figure 5B shows a view of the pin area of an IC device that has been mounted on a printed circuit board and attached by a reflow solder process. The half-sawn cut or step cut provided by the present disclosure can increase the wettable flanks or fillet height to 60% and meet, for example, automotive customer requirements. Thus, according to various teachings of the present disclosure, the "wettable flanks" of a flat no-leads device may be improved, and each solder joint made by a reflow soldering process may provide improved performance and/or increased acceptance rates during visual and/or performance testing. In contrast, a conventional manufacturing process for a flat no-leads integrated circuit package may leave pin connections without sufficient wettable surface for a reflow solder process. Even if the exposed pins are plated before separating the package from the leadframe or matrix, the final sawing step used in a typical process leaves only bare copper on the exposed faces of the pins. |
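Since the step cut, isolation cut, and singulation cut all run along the same cutting lines, the plated flank survives only if the relative widths and depths obey the constraints described above. The following is a minimal Python sketch of those geometric checks, using the approximate example dimensions from this disclosure; the function name, the structure, and the example isolation-cut depth are illustrative assumptions, not part of the disclosed method.

```python
def check_cut_geometry(step_w, step_d, iso_w, iso_d, sing_w, frame_t):
    """Validate the relative saw geometry of Steps 62, 66 and 70.

    step_w, step_d : step-cut width and depth (Step 62)
    iso_w, iso_d   : isolation-cut width and depth (Step 66)
    sing_w         : singulation-cut width (Step 70)
    frame_t        : leadframe (pin) thickness
    """
    return {
        # Step 62: the step cut must not sever the pins from the bar.
        "step cut leaves pins attached": step_d < frame_t,
        # Step 66: the isolation cut must pass through the full pin
        # thickness to electrically isolate the individual pins.
        "isolation cut severs pins": iso_d > frame_t,
        # Steps 66 and 70: both later cuts are narrower than the step
        # cut, so part of the plated step-cut flank survives them.
        "isolation narrower than step cut": iso_w < step_w,
        "singulation narrower than step cut": sing_w < step_w,
    }

# Example with the approximate dimensions given in the disclosure (mm);
# the isolation-cut depth of 0.25 mm is an assumed value > frame_t.
if __name__ == "__main__":
    checks = check_cut_geometry(step_w=0.40, step_d=0.125,
                                iso_w=0.24, iso_d=0.25,
                                sing_w=0.30, frame_t=0.20)
    for name, ok in checks.items():
        print(f"{name}: {'OK' if ok else 'VIOLATED'}")
```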
A system for regulating ON and/or ONO dielectric formation is provided. The system includes one or more light sources, each light source directing light to one or more oxide and/or nitride layers being deposited and/or formed on a wafer. Light reflected from the oxide and/or nitride layers is collected by a measuring system, which processes the collected light. The collected light is indicative of the thickness and/or uniformity of the respective oxide and/or nitride layers on the wafer. The measuring system provides thickness and/or uniformity related data to a processor that determines the thickness and/or uniformity of the respective oxide and/or nitride layers on the wafer. The system also includes a plurality of oxide/nitride formers; each oxide/nitride former corresponding to a respective portion of the wafer and providing for ON and/or ONO formation thereon. The processor selectively controls the oxide/nitride formers to regulate oxide and/or nitride layer formation on the respective ON and/or ONO formations on the wafer. |
What is claimed is: 1. A system for regulating oxide/nitride (ON) dielectric formation in non-volatile memory devices, comprising:at least one oxide/nitride former operative to form one or more oxide and/or nitride layers on a portion of a wafer; an oxide/nitride former driving system for driving the at least one oxide/nitride former; a system for directing light to the portion of the wafer; a measuring system for measuring parameters of ON formation thickness and uniformity based on light reflected from one or more ON formations; and a processor operatively coupled to the measuring system and the oxide/nitride former driving system, the processor receiving ON formation thickness and uniformity data from the measuring system and the processor using the data to at least partially base control of the at least one oxide/nitride former so as to continuously regulate ON thickness and uniformity on the portion of the wafer during formation of each layer thereon. 2. The system of claim 1, further operable to regulate oxide/nitride/oxide (ONO) dielectric formation in non-volatile memory devices, wherein:the measuring system is further operable to measure parameters of ONO formation thickness and uniformity based on light reflected from one or more ONO formations; and the processor further operable to receive ONO formation thickness and uniformity data from the measuring system and the processor using the data to at least partially base control of the at least one oxide/nitride former so as to continuously regulate ONO thickness and uniformity on the portion of the wafer. 3. The system of claim 1, the measuring system further including a scatterometry system for processing the light reflected from the one or more ON and/or ONO formations.4. The system of claim 3, the processor being operatively coupled to the scatterometry system, the processor analyzing data relating to thickness and uniformity received from the scatterometry system, and the processor basing control of the at least one oxide/nitride former at least partially on the analyzed data.5. The system of claim 1, the processor mapping the wafer into a plurality of grid blocks, and making a determination of ON and/or ONO formation thickness and uniformity at a grid block.6. The system of claim 1, wherein the processor determines whether thickness and uniformity for at least a portion of the wafer are within an acceptable range.7. The system of claim 6, wherein the processor controls the at least one oxide/nitride former to continuously regulate ON and/or ONO formation on the at least one portion to an acceptable value.8. A system for regulating ON and/or ONO formation, comprising:first sensing means for sensing ON and/or ONO formation thickness of one or more of oxide and/or nitride layers; second sensing means for sensing uniformity of one or more of ON and/or ONO formations; forming means for forming one or more oxide and/or nitride layers; and controlling means for selectively controlling the forming means so as to regulate oxide and/or nitride formation. 9.
A method for regulating ON and/or ONO formation, comprising:defining a wafer as a plurality of portions; establishing one or more ON and/or ONO formations to be formed; directing light onto at least one of the ON and/or ONO formations; collecting light reflected from at least one ON and/or ONO formation; analyzing the reflected light to determine thickness and uniformity of the at least one ON and/or ONO formation; and continuously controlling one or more oxide/nitride formers to regulate oxide and/or nitride formation of the at least one ON and/or ONO formation. 10. The method of claim 9, further comprising:employing a scatterometry system to process the reflected light. 11. The method of claim 10, further comprising:using a processor to control the at least one oxide/nitride former based at least partially on data received from the scatterometry system. 12. The method of claim 11, further comprising:using a processor to continuously control the at least one oxide/nitride former based at least partially on data received from the scatterometry system. 13. A method for regulating ON and/or ONO formation, comprising:partitioning a wafer into a plurality of grid blocks; using one or more oxide/nitride formers to form one or more oxide and/or nitride layers on the wafer, each oxide/nitride former functionally corresponding to a respective grid block; determining thickness and uniformity of the one or more ON and/or ONO formations on one or more portions of the wafer, each portion corresponding to a respective grid block; and using a processor to coordinate continuous control of the oxide/nitride formers, respectively, in accordance with determined oxide and/or nitride thickness and uniformity of the respective portions of the wafer. |
TECHNICAL FIELD
The present invention generally relates to semiconductor processing, and in particular to systems and methods for regulating the formation of dielectric layers in non-volatile semiconductor memory devices.
BACKGROUND OF THE INVENTION
In the semiconductor industry, there is a continuing trend toward higher device densities. To achieve these high densities there have been, and continue to be, efforts toward scaling down device dimensions (e.g., at sub-micron levels) on semiconductor wafers. In order to accomplish such high device packing densities, smaller and smaller feature sizes and separations between such features are required. This can include the thickness and spacing of dielectric materials, oxide/nitride (ON) and/or oxide/nitride/oxide (ONO) materials, interconnecting lines, spacing and diameter of contact holes, and the surface geometry such as corners and edges of various features. The process of manufacturing semiconductors, or integrated circuits (commonly called ICs, or chips), typically consists of more than a hundred steps, during which hundreds of copies of an integrated circuit can be formed on a single wafer. Generally, the process involves creating several layers on and/or in a substrate that ultimately forms the complete integrated circuit. This layering process creates electrically active regions in and/or on the semiconductor wafer surface. Insulation and conductivity between such electrically active regions can be important to reliable operation of such integrated circuits. One type of integrated circuit in which insulation and conductivity between electrically active regions is important is electronic memory. Electronic memory comes in different forms to serve different purposes. One such electronic memory, FLASH memory, can be employed for information storage in devices including, but not limited to, cellular phones, digital cameras and home video game consoles. FLASH memory can be considered a solid state storage device, in that functionality is achieved electronically rather than mechanically. FLASH memory is a type of EEPROM (Electrically Erasable Programmable Read Only Memory) chip. FLASH memories are a type of non-volatile memory (NVM). NVMs can retain information when power to the NVM is removed, which distinguishes NVMs from volatile memories (e.g., DRAM, SRAM) that lose data stored in them when power is removed. FLASH memory is electrically erasable and reprogrammable in-system. The combination of non-volatility and in-system erasability/reprogrammability makes FLASH memory well-suited to a number of end-product applications including, but not limited to, a personal computer BIOS, telecom switches, cellular phones, internetworking devices, instrumentation, automotive devices and consumer-oriented voice, image and data storage devices (e.g., digital cameras, digital voice recorders, PDAs). An exemplary FLASH memory can have a grid of columns and rows with a cell that has two transistors at each intersection of the rows and columns. Thus, referring initially to Prior Art FIG. 1, a cross section of an exemplary FLASH memory cell 100 is illustrated. The exemplary FLASH memory cell 100 illustrated includes a control gate 102 and a floating gate 106 separated by an ON and/or ONO layer 112. The control gate 102 can be referred to as a "poly 2" while the floating gate 106 can be referred to as a "poly 1", and thus the term interpoly dielectric can be applied to the ON and/or ONO layer 112.
Properties of the ON and/or ONO layer 112, including, but not limited to, thickness and uniformity, are important to facilitating reliable operation of the memory cell. Furthermore, properties of the ON and/or ONO layer 112, including, but not limited to, thickness and uniformity, are important to facilitating reliable interactions between the control gate 102 and the floating gate 106. Properties of the ON and/or ONO layer 112 are thus important to facilitating reliable operation of the FLASH memory cell 100, due to the insulating and/or conducting property of the ON and/or ONO layer 112. For example, properties including, but not limited to, the ability to store data, to retain data, to be erased, to be reprogrammed and to operate in desired electrical and temperature ranges can be affected by the thickness and/or uniformity of the ON and/or ONO layer 112. The control gate 102, floating gate 106 and ON and/or ONO layer 112 can be fabricated on a tunnel oxide layer 108. It is to be appreciated that although the ON and/or ONO layer 112 is illustrated as one layer, such a layer can be formed from multiple layers (e.g., oxide, nitride, oxide (so-called ONO)). It is to be further appreciated that although the FLASH memory cell illustrated in Prior Art FIG. 1 employs an interpoly dielectric, the present invention can be applied to the formation of charge trapping dielectrics in SONOS (Silicon Oxide Nitride Oxide Silicon) type memory devices and MONOS (Metal Oxide Nitride Oxide Silicon) devices. The requirement of small features with close spacing between adjacent features in FLASH memory devices requires sophisticated manufacturing techniques, including control of oxide/nitride layer and/or oxide/nitride/oxide layer formation. Fabricating a FLASH memory device using such sophisticated techniques may involve a series of steps including the formation of layers/structures by chemical vapor deposition (CVD) and oxide growth. Conventionally, difficulties in forming, with precise thickness and/or uniformity, an oxide layer over a nitride layer or over polysilicon have limited the effectiveness and/or properties of FLASH memory devices manufactured by conventional techniques. Similarly, difficulties in forming, with precise thickness and/or uniformity, a nitride layer over an oxide layer have likewise limited the effectiveness of FLASH memory devices manufactured by conventional techniques. Due to the extremely fine structures that are fabricated on a FLASH memory device, controlling the formation of oxide and/or nitride materials used to separate components (e.g., control gate, floating gate) on a wafer from other components is a significant factor in achieving desired critical dimensions and operating properties, and thus in manufacturing a reliable FLASH memory device. The more precisely the oxide and/or nitride can be formed, the more precisely critical dimensions may be achieved, with a corresponding increase in FLASH memory reliability. Conventionally, due to non-uniform oxide and/or nitride formation and inaccurate oxide and/or nitride formation monitoring techniques, a thickness of oxide and/or nitride greater or lesser than the thickness desired may be formed.
SUMMARY OF THE INVENTION
The following presents a simplified summary of the invention in order to provide a basic understanding of some aspects of the invention. This summary is not an extensive overview of the invention. It is not intended to identify key/critical elements of the invention or to delineate the scope of the invention.
Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description that is presented later. The present invention provides for a system that facilitates monitoring and controlling ON and/or ONO dielectric formation. An exemplary system may employ one or more light sources arranged to project light onto one or more oxide and/or nitride layers on a wafer and one or more light sensing devices (e.g., photo detector, photo diode) for detecting light reflected by the one or more oxide and/or nitride layers. The light reflected from the one or more oxide and/or nitride layers is indicative of at least the oxide and/or nitride thickness, which may vary during the oxide and/or nitride formation process. One or more oxide/nitride formers can be arranged to correspond to a particular wafer portion. Each oxide/nitride former may be responsible for forming an oxide and/or nitride portion of an ON and/or ONO formation on one or more particular wafer portions. The oxide/nitride formers are selectively driven by the system to form an oxide and/or nitride portion of the ON and/or ONO dielectric at a desired thickness and/or desired uniformity. The progress of the oxide and/or nitride formation is monitored by the system by comparing the thickness and/or uniformity of the oxide and/or nitride portions of the ON and/or ONO dielectric on the wafer to a desired thickness and/or uniformity. Different wafers and even different components within a wafer may benefit from varying oxide and/or nitride thickness and/or uniformity. By monitoring the oxide and/or nitride thickness and/or uniformity at the one or more wafer portions, the present invention enables selective control of oxide and/or nitride formation. As a result, more optimal ON and/or ONO dielectric formation is achieved, which in turn improves FLASH memory manufacturing. One particular aspect of the invention relates to a system for regulating oxide and/or nitride formation. At least one oxide/nitride former forms an oxide and/or nitride portion of the ON and/or ONO dielectric on a portion of a wafer, and an oxide/nitride former driving system drives the at least one oxide/nitride former. A system for directing light directs light to one or more oxide and/or nitride layers being formed on the wafer, and a measuring system measures parameters of the one or more oxide and/or nitride layers based on light reflected by the layers. A processor is operatively coupled to the measuring system and the oxide/nitride former driving system; the processor receives oxide and/or nitride formation parameter data from the measuring system and uses the data to at least partially base control of the at least one oxide/nitride former so as to regulate oxide and/or nitride formation on the at least one portion of the wafer where oxide and/or nitride is being formed. Yet another aspect of the present invention relates to a method for regulating oxide and/or nitride formation.
The method includes defining a wafer as a plurality of portions; forming one or more oxide and/or nitride layers on the wafer; directing light onto at least one of the oxide and/or nitride layers; collecting light reflected by the at least one oxide and/or nitride layer; analyzing the reflected light to determine the progress of oxide and/or nitride formation on the wafer; and controlling an oxide/nitride former to regulate the formation of the oxide and/or nitride layer on the at least one portion. Still another aspect of the present invention relates to a method for regulating oxide and/or nitride formation. The method includes: partitioning a wafer into a plurality of grid blocks; forming one or more oxide and/or nitride layers on the wafer using one or more oxide/nitride formers, each oxide/nitride former functionally corresponding to a respective grid block; determining the progress of the oxide and/or nitride formation on portions of the wafer, each portion corresponding to a respective grid block; and using a processor to coordinate control of the oxide/nitride formers, respectively, in accordance with determined oxide and/or nitride thickness and/or uniformity of the respective portions of the wafer. Another aspect of the present invention relates to a system for regulating ON and/or ONO dielectric formation. The system includes: means for sensing oxide and/or nitride thickness and/or uniformity of a plurality of portions of a wafer; means for forming oxide and/or nitride layers on the respective wafer portions; and means for selectively controlling the means for forming oxide and/or nitride layers so as to regulate oxide and/or nitride thickness and/or uniformity on the respective wafer portions. To the accomplishment of the foregoing and related ends, the invention, then, comprises the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative aspects of the invention. These aspects are indicative, however, of but a few of the various ways in which the principles of the invention may be employed. Other objects, advantages and novel features of the invention will become apparent from the following detailed description of the invention when considered in conjunction with the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Prior Art FIG. 1 is a cross section of an exemplary FLASH memory cell.
FIG. 2 is a partial schematic block diagram of an ON and/or ONO dielectric formation monitoring system in accordance with the present invention.
FIG. 3 is a schematic block diagram of an ON and/or ONO dielectric formation monitoring system in accordance with the present invention.
FIG. 4 is a partial schematic block diagram of the system of FIG. 3 being employed in connection with determining the thickness and/or uniformity of oxide and/or nitride layers in accordance with the present invention.
FIG. 5 illustrates an ONO dielectric.
FIG. 6 is a perspective illustration of a substrate having an oxide and/or nitride layer deposited thereon in accordance with the present invention.
FIG. 7 is a representative three-dimensional grid map of oxide and/or nitride layer formations illustrating oxide and/or nitride layer thickness and/or uniformity measurements taken at grid blocks of the grid map in accordance with the present invention.
FIG. 8 is an oxide and/or nitride layer thickness and/or uniformity measurement table correlating the oxide and/or nitride thickness and/or uniformity measurements of FIG. 7 with desired values for the thickness and/or uniformity measurements in accordance with the present invention.
FIG. 9 is a simplified perspective view of an incident light reflecting off a surface, in accordance with an aspect of the present invention.
FIG. 10 is a simplified perspective view of an incident light reflecting off a surface, in accordance with an aspect of the present invention.
FIG. 11 illustrates a complex reflected and refracted light produced when an incident light is directed onto a surface, in accordance with an aspect of the present invention.
FIG. 12 illustrates a complex reflected and refracted light produced when an incident light is directed onto a surface, in accordance with an aspect of the present invention.
FIG. 13 illustrates a complex reflected and refracted light produced when an incident light is directed onto a surface, in accordance with an aspect of the present invention.
FIG. 14 illustrates phase and intensity signals recorded from a complex reflected and refracted light produced when an incident light is directed onto a surface, in accordance with an aspect of the present invention.
FIG. 15 is an example scatterometry system collecting reflected light in accordance with an aspect of the present invention.
FIG. 16 is a flow diagram illustrating one specific methodology for carrying out the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. The present invention will be described with reference to a system for controlling oxide and/or nitride formation using one or more oxide/nitride formers and a scatterometry system. It should be understood that the description of these exemplary aspects is merely illustrative and that it should not be taken in a limiting sense. It is to be appreciated that various aspects of the present invention may employ technologies associated with facilitating unconstrained optimization and/or minimization of error costs. Thus, non-linear training systems/methodologies (e.g., back propagation, Bayesian, fuzzy sets, non-linear regression, or other neural networking paradigms including mixture of experts, cerebellar model arithmetic computer (CMACS), radial basis functions, directed search networks and function link networks) may be employed. Referring now to FIG. 2, a system 10 for controlling ON and/or ONO dielectric formation on a wafer 14 is shown. One or more ON and/or ONO formations may be formed on the wafer 14. It is to be appreciated that an ON and/or ONO formation can include one or more layers including, but not limited to, oxide and nitride layers. The system 10 includes dielectric layer forming components 16 operable to form a dielectric layer on the wafer 14. The system 10 further includes a measurement component 18 operable to measure, in situ, the developing thickness of the oxide and/or nitride layers being formed on the wafer 14 by the dielectric layer forming components 16. The measurement component 18 can direct a light 19 at the wafer 14 and receive light reflected and/or refracted back from the wafer 14. Such reflected and/or refracted light can be analyzed by the measurement component 18, with the results of such analysis passed to a control system 17.
The control system 17 can thus be employed to feed forward control information to the dielectric layer forming components 16, facilitating more precise control of the dielectric layer formed on the wafer 14.

Referring now to FIG. 3, a system 20 for controlling ON and/or ONO dielectric formation on a wafer 22 is shown. One or more ON and/or ONO formations 24 may be formed on the wafer 22. It is to be appreciated by one skilled in the art that an ON and/or ONO formation 24 can include one or more layers including, but not limited to, oxide and nitride layers. It is to be further appreciated that such oxide and nitride layers can be formed employing techniques including, but not limited to, chemical vapor deposition and oxide growth.

The system 20 further includes one or more oxide/nitride formers 42 that are selectively controlled by the system 20 so as to facilitate controlled formation of oxide and/or nitride layers on the wafer 22. One or more light sources 44 project light onto respective portions of the wafer 22. A portion may have one or more ON and/or ONO formations 24 being formed on that portion. Light reflected by the one or more ON and/or ONO formations 24 is collected by one or more light collecting devices 40 and is processed by an ON and/or ONO formation parameter measuring system 50 to measure at least one parameter relating to the thickness and/or uniformity of the one or more ON and/or ONO formations 24. The reflected light is processed with respect to the incident light in measuring the various parameters.

The measuring system 50 includes a scatterometry system 50a. It is to be appreciated that any suitable scatterometry system may be employed to carry out the present invention, and such systems are intended to fall within the scope of the claims. A source of light 62, such as a laser, for example, provides light to the one or more light sources 44 via the measuring system 50. Preferably, the light source 62 is a frequency-stabilized laser; however, it will be appreciated that any laser or other light source (e.g., laser diode or helium neon (HeNe) gas laser) suitable for carrying out the present invention can be employed.

A processor 60 receives the measured data from the measuring system 50 and determines the thickness and/or uniformity of respective ON and/or ONO formations 24 on the portions of the wafer 22. The processor 60 is operatively coupled to the measuring system 50 and is programmed to control and operate the various components within the oxide and/or nitride monitoring and controlling system 20 in order to carry out the various functions described herein. The processor, or CPU 60, may be any of a plurality of processors. The manner in which the processor 60 can be programmed to carry out the functions relating to the present invention will be readily apparent to those having ordinary skill in the art based on the description provided herein.

A memory 70, which is operatively coupled to the processor 60, is also included in the system 20 and serves to store program code executed by the processor 60 for carrying out operating functions of the system 20 as described herein. The memory 70 also serves as a storage medium for temporarily storing information such as oxide and/or nitride layer thickness, oxide and/or nitride layer thickness tables, oxide and/or nitride layer uniformity, oxide and/or nitride layer tables, wafer coordinate tables, scatterometry information, and other data that may be employed in carrying out the present invention.
A power supply 78 provides operating power to the system 20. Any suitable power supply (e.g., battery, line power) may be employed to carry out the present invention.

The processor 60 is also coupled to an oxide/nitride former driving system 80 that drives the oxide/nitride formers 42. The oxide/nitride former driving system 80 is controlled by the processor 60 so as to selectively vary the output of the respective oxide/nitride formers 42, and thus facilitates more precise control of the thickness of the oxide and/or nitride layers. Each respective portion of the wafer 22 may have a corresponding oxide/nitride former 42 associated therewith. The processor 60 is able to monitor the development of the various ON and/or ONO formations 24 and selectively regulate the thickness and/or uniformity of each portion via the corresponding oxide/nitride formers 42. As a result, the system 20 provides for regulating ON and/or ONO formation 24 thickness and/or uniformity on the wafer 22, which in turn improves, for example, the reliability of FLASH memory devices manufactured employing the present invention. Although a processing chamber is not shown, the wafer 22, the ON and/or ONO formations 24, the chuck 30, the light sources 44, the light collecting devices 40 and the oxide/nitride formers 42 may be positioned within a processing chamber wherein certain parameters (e.g., temperature, pressure, atmosphere composition and the like) can be controlled.

FIG. 4 illustrates the system 20 being employed to measure the thickness and/or uniformity of ON and/or ONO formations 24 on a wafer 22 at a particular location on the wafer. The light source 44 directs a light 44a incident to the surface of the wafer 22, and the angle of a reflected and/or refracted light 44b from the surface of the wafer 22 will vary in accordance with the thickness and/or uniformity of the ON and/or ONO formation 24. The measuring system 50 collects the light 44b and processes the light 44b in accordance with scatterometry techniques to provide the processor 60 with data corresponding to the thickness and/or uniformity of the ON and/or ONO formation 24.

FIG. 5 illustrates a dielectric layer 112 formed of three layers, an oxide layer 114, a nitride layer 116, and an oxide layer 118. Precisely controlling the thickness and/or uniformity of each of the three layers 114, 116 and 118 leads to improvements in the reliability of a FLASH memory cell. Thus, the present invention facilitates controlling the thickness and/or uniformity of each of the layers 114, 116, and 118 individually, and/or facilitates controlling the overall thickness and/or uniformity of the ON and/or ONO dielectric layer 112. For example, the present invention can facilitate controlling the thickness and/or uniformity of the oxide layer 118 by collecting scatterometry data associated with the oxide layer 118 during formation. Data collected during the formation of the oxide layer 118 can thus be analyzed and employed to produce information that can be fed back to control the formation process. For example, if the oxide layer 118 is being formed by thermal oxidation, the analyzed scatterometry data can be employed to generate feedback information operable to control the time over which the oxide growth should continue and/or the temperature at which the continued oxide growth should occur.
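By way of illustration only, such a time-feedback computation can be sketched in C. The function name, the units, and the simple linear growth-rate model are assumptions made for this example, not details of the present invention; an actual controller would rely on a calibrated (generally non-linear) growth model together with the in situ scatterometry measurements described herein:

#include <stdio.h>

/* Illustrative linear model: seconds of additional growth needed to
 * reach a target thickness, given the thickness measured in situ. */
static double remaining_growth_time(double target_nm, double measured_nm,
                                    double growth_rate_nm_per_s)
{
    double deficit = target_nm - measured_nm;
    if (deficit <= 0.0)
        return 0.0;                 /* target reached; stop growth */
    return deficit / growth_rate_nm_per_s;
}

int main(void)
{
    /* e.g., oxide layer 118: 10.0 nm target, 7.2 nm measured in situ */
    printf("continue oxidation for %.0f s\n",
           remaining_growth_time(10.0, 7.2, 0.05));
    return 0;
}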
If the oxide layers 118 and 114 and the nitride layer 116 are instead formed by a CVD technique, the analyzed scatterometry data can be employed to generate feedback information operable to control deposition temperature, rates of gas flows, pressures, etc. If the top oxide 114 is formed by partial thermal oxidation of the nitride, either by dry (O2) or steam (H2O) oxidation, then the analyzed scatterometry data can be employed to generate feedback information operable to control oxidation temperature, O2 and/or H2O gas flow rates, pressures, etc.

Turning now to FIGS. 6-8, the chuck 30 is shown in perspective supporting the wafer 22, whereupon one or more ON and/or ONO formations 24 may be formed. The wafer 22 can be divided into a grid pattern as shown in FIG. 7. Each grid block (XY) of the grid pattern corresponds to a particular portion of the wafer 22, and each grid block may have one or more ON and/or ONO formations 24 associated with that grid block. Each portion is individually monitored for oxide and/or nitride thickness and/or uniformity, and each portion is individually controlled for oxide and/or nitride formation.

In FIG. 7, each ON and/or ONO formation 24 in each respective portion of the wafer (X1Y1 . . . X12Y12) is being monitored for thickness and/or uniformity using reflective light, the measuring system 50 and the processor 60. The thickness of each ON and/or ONO formation 24 is shown. As can be seen, the thickness at coordinate X7Y6 is substantially higher than the thickness of the other wafer 22 portions XY. It is to be appreciated that although FIG. 7 illustrates the wafer 22 being mapped (partitioned) into 144 grid block portions, the wafer 22 may be mapped with any suitable number of portions, and any suitable number of ON and/or ONO formations 24 can be monitored. Although the present invention is described with respect to one oxide/nitride former 42 corresponding to one grid block XY, it is to be appreciated that any suitable number of oxide/nitride formers 42 corresponding to any suitable number of grid blocks may be employed. It is to be further appreciated that although FIG. 7 illustrates measurements for oxide and/or nitride formation thickness, measurements for uniformity may also be taken.

FIG. 8 is a representative table of thickness measurements (taken for the various grid blocks) that have been correlated with acceptable thickness values for the portions of the wafer 22 mapped by the respective grid blocks. As can be seen, all the grid blocks, except grid block X7Y6, have thickness measurements corresponding to an acceptable thickness value (TA) (e.g., are within an expected range of thickness measurements), while grid block X7Y6 has an undesired thickness value (TU). Thus, the processor 60 has determined that an undesirable thickness condition exists at the portion of the wafer 22 mapped by grid block X7Y6. Accordingly, the processor 60 can drive the oxide/nitride former 42(7,6), which corresponds to the portion of the wafer 22 mapped at grid block X7Y6, to bring the oxide and/or nitride thickness of this portion of the wafer 22 to an acceptable level. It is to be appreciated that the oxide/nitride formers 42 may be driven so as to maintain, increase and/or decrease the rate of oxide and/or nitride formation of the respective wafer 22 portions as desired. It is to be appreciated that although FIG.
8 illustrates measurements for oxide and/or nitride formation thickness, measurements for uniformity may also be taken.

Scatterometry is a technique for extracting information about a surface upon which an incident light has been directed. Information concerning properties including, but not limited to, dishing, erosion, profile, thickness of thin films and critical dimensions of features present on the surface can be extracted. The information can be extracted by comparing the phase and/or intensity of the light directed onto the surface with phase and/or intensity signals of a complex reflected and/or diffracted light resulting from the incident light reflecting from and/or diffracting through the surface upon which the incident light was directed. The intensity and/or the phase of the reflected and/or diffracted light will change based on properties of the surface upon which the light is directed. Such properties include, but are not limited to, the chemical properties of the surface, the planarity of the surface, features on the surface, voids in the surface, and the number and/or type of layers beneath the surface.

Different combinations of the above-mentioned properties will have different effects on the phase and/or intensity of the incident light, resulting in substantially unique intensity/phase signatures in the complex reflected and/or diffracted light. Thus, by examining a signal (signature) library of intensity/phase signatures, a determination can be made concerning the properties of the surface. Such substantially unique phase/intensity signatures are produced by light reflected from and/or refracted by different surfaces due, at least in part, to the complex index of refraction of the surface onto which the light is directed. The complex index of refraction (N) can be computed by examining the index of refraction (n) of the surface and an extinction coefficient (k). One such computation of the complex index of refraction can be described by the equation:

N = n - jk

where j is an imaginary number.

The signal (signature) library can be constructed from observed intensity/phase signatures and/or signatures generated by modeling and simulation. By way of illustration, when exposed to a first incident light of known intensity, wavelength and phase, a first feature on a wafer can generate a first phase/intensity signature. Similarly, when exposed to the first incident light of known intensity, wavelength and phase, a second feature on a wafer can generate a second phase/intensity signature. For example, a line of a first width may generate a first signature while a line of a second width may generate a second signature. Observed signatures can be combined with simulated and modeled signatures to form the signal (signature) library. Simulation and modeling can be employed to produce signatures against which measured phase/intensity signatures can be matched. In one exemplary aspect of the present invention, simulation, modeling and observed signatures are stored in a signal (signature) library containing over three hundred thousand phase/intensity signatures. Thus, when the phase/intensity signals are received from scatterometry detecting components, the phase/intensity signals can be pattern matched, for example, to the library of signals to determine whether the signals correspond to a stored signature.

To illustrate the principles described above, reference is now made to FIGS. 9 through 14. Referring initially to FIG.
9, an incident light 202 is directed at a surface 200, upon which one or more features 206 may exist. In FIG. 9 the incident light 202 is reflected as reflected light 204. The properties of the surface 200, including but not limited to thickness, uniformity, planarity, chemical composition and the presence of features, can affect the reflected light 204. In FIG. 9, the features 206 are raised upon the surface 200. The phase and intensity of the reflected light 204 can be measured and plotted, as shown, for example, in FIG. 14. The phase 260 of the reflected light 204 can be plotted, as can the intensity 262 of the reflected light 204. Such plots can be employed to compare measured signals with signatures stored in a signature library using techniques like pattern matching, for example. Although the features 206 are illustrated as substantially regular, it is to be appreciated that irregular features can also be measured using scatterometry techniques.

Referring now to FIG. 10, an incident light 212 is directed onto a surface 210 upon which one or more depressions 216 appear. The incident light 212 is reflected as reflected light 214. Just as the one or more features 206 (FIG. 9) may affect an incident beam, so too may the one or more depressions 216 affect an incident beam. Thus, it is to be appreciated that scatterometry can be employed to measure features appearing on a surface, features appearing in a surface, and properties of a surface itself, regardless of features. It is to be further appreciated that the term "features" includes features intentionally and unintentionally appearing on a surface.

Turning now to FIG. 11, complex reflections and refractions of an incident light 240 are illustrated. The reflection and refraction of the incident light 240 can be affected by factors including, but not limited to, the presence of one or more features 228, and the composition of the substrate 220 upon which the features 228 reside. For example, properties of the substrate 220 including, but not limited to, the thickness of a layer 222, the chemical properties of the layer 222, the opacity and/or reflectivity of the layer 222, the thickness of a layer 224, the chemical properties of the layer 224, the opacity and/or reflectivity of the layer 224, the thickness of a layer 226, the chemical properties of the layer 226, and the opacity and/or reflectivity of the layer 226 can affect the reflection and/or refraction of the incident light 240. Thus, a complex reflected and/or refracted light 242 may result from the incident light 240 interacting with the features 228, and/or the layers 222, 224 and 226. Although three layers 222, 224 and 226 are illustrated in FIG. 11, it is to be appreciated by one skilled in the art that a dielectric can be formed of a greater or lesser number of such layers.

Turning now to FIG. 12, one of the properties from FIG. 11 is illustrated in greater detail. The dielectric 220 can be formed of one or more layers 222, 224 and 226. For example, layer 222 may be an oxide layer, layer 224 may be a nitride layer, and layer 226 may be an oxide layer. The phase 250 of the reflected and/or refracted light 242 can depend, at least in part, on the thickness of a layer, for example, the layer 224. Thus, in FIG. 13, the phase 252 of the reflected light 242 differs from the phase 250 due, at least in part, to the different thickness of the layer 224 in FIG. 13. Although the phase is measured and plotted in association with FIGS.
12 and 13, changes to intensity may also be measured and plotted in accordance with the present invention.

Thus, scatterometry is a technique that can be employed to extract information about a surface upon which an incident light has been directed. The information can be extracted by analyzing phase and/or intensity signals of a complex reflected and/or diffracted light. The intensity and/or the phase of the reflected and/or diffracted light will change based on properties of the surface upon which the light is directed, resulting in substantially unique signatures that can be analyzed to determine one or more properties of the surface upon which the incident light was directed.

FIG. 15 illustrates an exemplary scatterometry system collecting reflected light. Light from a laser 300 is brought to focus in any suitable well-known manner to form a beam 302. A sample, such as a wafer 304, is placed in the path of the beam 302, and the light reflected from the sample is collected by a photo detector or photo multiplier 306 of any suitable well-known construction. Different detector methods may be employed to determine the scattered power. To obtain a grating pitch, the photo detector or photo multiplier 306 may be mounted on a rotation stage 308 of any suitable well-known design. A microprocessor 310, of any suitable well-known design, may be used to process detector readouts, including but not limited to the angular locations of different diffracted orders, from which diffraction grating pitches can be calculated. Thus, light reflected from the sample 304 may be accurately measured.

In view of the exemplary systems shown and described above, a methodology, which may be implemented in accordance with the present invention, will be better appreciated with reference to the flow diagram of FIG. 16. While, for purposes of simplicity of explanation, the methodology of FIG. 16 is shown and described as a series of blocks, it is to be understood and appreciated that the present invention is not limited by the order of the blocks, as some blocks may, in accordance with the present invention, occur in different orders and/or concurrently with other blocks from that shown and described herein. Moreover, not all illustrated blocks may be required to implement a methodology in accordance with the present invention.

FIG. 16 is a flow diagram illustrating one particular methodology for carrying out the present invention. At 400, general initializations are performed. Such initializations can include, but are not limited to, allocating memory, establishing pointers, establishing data communications, initializing variables and instantiating objects. At 410, at least a portion of a wafer is partitioned into a plurality of grid blocks "XY". At 415, at least a part of an oxide and/or nitride layer is formed. For example, a first oxide layer can be formed through oxide growth or deposition by CVD. A nitride layer can then be formed on the first oxide layer through nitride deposition (e.g., employing CVD). Then, a second oxide layer can be formed on the nitride layer through oxide deposition (e.g., employing CVD) or partial oxidation of the nitride.

At 420, thickness and/or uniformity determinations are made with respect to the various wafer portions mapped by respective grid blocks XY. At 430, a determination is made concerning whether all grid block measurements have been taken. If the determination at 430 is NO, then processing returns to 420.
If the determination at 430 is YES, then at 440 the determined thickness and/or uniformity values are compared to acceptable thickness levels for the respective portions of the wafer. At 450, a determination is made concerning whether any unacceptable thickness and/or uniformity values exist. If the determination at 450 is NO, meaning that all thickness and/or uniformity values are acceptable, then at 460 a determination is made concerning whether further formation is required. If the determination at 460 is NO, then processing completes. If the determination at 460 is YES, then processing continues at 415. If the determination at 450 was YES, meaning that unacceptable thickness and/or uniformity values were found, then at 470 the unacceptable thickness and/or uniformity values are analyzed. After the analysis at 470, feedback information can be generated to control the oxide/nitride formers corresponding to grid blocks with unacceptable thickness and/or uniformity values, so as to regulate characteristics of oxide and/or nitride formation on the respective wafer portions. For example, information concerning the time remaining for oxide and/or nitride formation and/or the temperature at which such oxide and/or nitride formation should occur can be generated. The present iteration is then ended and the process returns to 415 to perform another iteration. (A simplified code sketch of this measure-compare-control loop appears after the concluding remarks below.)

What has been described above includes examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art may recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
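As a minimal sketch of the FIG. 16 loop (blocks 420-470), the following C program iterates over the grid blocks, compares each measured thickness to an acceptable band, and drives the corresponding former for out-of-range blocks. The stub functions, the 12 x 12 grid size taken from FIG. 7, and the numeric thresholds are illustrative assumptions, not details of the present invention:

#include <stdio.h>

#define GRID_DIM 12        /* FIG. 7 shows a 12 x 12 grid of blocks */
#define T_MIN 9.5          /* assumed acceptable thickness band (TA) */
#define T_MAX 10.5

/* Stub standing in for the measuring system 50; a real system would
 * return a scatterometry-derived thickness for grid block (x, y). */
static double measure_thickness(int x, int y)
{
    return (x == 6 && y == 5) ? 12.3 : 10.0;   /* X7Y6 is too thick */
}

/* Stub standing in for the former driving system 80. */
static void drive_former(int x, int y, double t)
{
    printf("grid block X%dY%d: %.1f out of range, adjusting former\n",
           x + 1, y + 1, t);
}

/* One pass of the FIG. 16 methodology: measure every grid block,
 * compare to the acceptable band, and feed back control to the
 * oxide/nitride formers for any out-of-range blocks. */
int main(void)
{
    for (int x = 0; x < GRID_DIM; x++)
        for (int y = 0; y < GRID_DIM; y++) {
            double t = measure_thickness(x, y);
            if (t < T_MIN || t > T_MAX)
                drive_former(x, y, t);
        }
    return 0;
}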
The disclosure describes a method for performing a fixed point calculation of a floating point operation (A // B) in a coding device, wherein A // B represents integer division of A divided by B rounded to a nearest integer. The method may comprise selecting an entry from a lookup table (LUT) having entries generated as an inverse function of an index B, wherein B defines a range of values that includes every DC scalar value and every quantization parameter associated with a coding standard, and calculating A // B for coding according to the coding standard based on values A, B1 and B2, wherein B1 and B2 comprise high and low portions of the selected entry of the LUT. The techniques may simplify digital signal processor (DSP) implementations of video coders, and are specifically useful for MPEG-4 coders and possibly others. |
1. An encoding device, comprising:

a memory that stores a lookup table (LUT) having entries generated as an inverse function of an index B, each entry generated according to floor(2^31 / B) + 1, wherein floor represents an operation of rounding down to an integer, and wherein the index B defines a range of values that includes every DC scalar value and every quantization parameter associated with a coding standard; and

a fixed-point calculation unit that performs a fixed-point calculation of a floating-point operation A // B for coding according to the coding standard based on values A, B1 and B2, wherein a selected entry of the LUT is divided into components B1 and B2, B1 comprising a high portion of the selected LUT entry and B2 comprising a low portion of the selected LUT entry, and wherein A // B represents integer division of A divided by B rounded to the nearest integer.

2. The encoding device of claim 1, wherein the encoding device comprises a video encoding device and the coding standard comprises the MPEG-4 video coding standard.

3. The encoding device of claim 2, further comprising an AC/DC prediction unit that uses the fixed-point calculation to perform DC prediction according to the MPEG-4 video coding standard when B represents a DC scalar value, and uses the fixed-point calculation to perform AC prediction according to the MPEG-4 video coding standard when B represents a quantization parameter.

4. The encoding device of claim 1, wherein B is in the range of [1, 46].

5. The encoding device of claim 1, wherein the fixed-point calculation of the floating-point operation A // B comprises a result given by:

(((B1*A) << 1) + ((B2*A) >> 15) + 32768) >> 16

wherein << represents a left shift operation, >> represents a right shift operation, and 32768 represents a constant used to ensure rounding to the nearest integer.

6. The encoding device of claim 1, wherein the fixed-point calculation of the floating-point operation A // B comprises a result given by:

((B1*C) + ((B2*C) >> 16) + 32768) >> 16

wherein << represents a left shift operation, >> represents a right shift operation, C represents (2*A), and 32768 represents a constant used to ensure rounding to the nearest integer.

7. The encoding device of claim 1, further comprising generating the LUT.

8. The encoding device of claim 1, wherein the Q number associated with the LUT is Q31, such that all values of the entries represent fractional values.

9. A method for performing a fixed-point calculation of a floating-point operation A // B in an encoding device, wherein A // B represents integer division of A divided by B rounded to the nearest integer, the method comprising:

selecting an entry from a lookup table (LUT) having entries generated as an inverse function of an index B, each entry generated according to floor(2^31 / B) + 1, wherein floor represents an operation of rounding down to an integer, and wherein B defines a range of values that includes every DC scalar value and every quantization parameter associated with a coding standard; and

calculating A // B for coding according to the coding standard based on values A, B1 and B2, wherein the selected entry of the LUT is divided into components B1 and B2, B1 comprising a high portion of the selected LUT entry and B2 comprising a low portion of the selected LUT entry.

10. The method of claim 9, wherein the coding standard comprises the MPEG-4 video coding standard.

11. The method of claim 10, further comprising using the fixed-point calculation to perform DC prediction according to the MPEG-4 video coding standard when B represents a DC scalar value, and using the fixed-point calculation to perform AC prediction according to the MPEG-4 video coding standard when B represents a quantization parameter.

12. The method of claim 9, wherein B is in the range of [1, 46].

13. The method of claim 9, wherein the fixed-point calculation of the floating-point operation A // B comprises a result given by:

(((B1*A) << 1) + ((B2*A) >> 15) + 32768) >> 16

wherein << represents a left shift operation, >> represents a right shift operation, and 32768 represents a constant used to ensure rounding to the nearest integer.

14. The method of claim 9, wherein the fixed-point calculation of the floating-point operation A // B comprises a result given by:

((B1*C) + ((B2*C) >> 16) + 32768) >> 16

wherein << represents a left shift operation, >> represents a right shift operation, C represents (2*A), and 32768 represents a constant used to ensure rounding to the nearest integer.

15. The method of claim 9, further comprising generating the LUT.

16. The method of claim 9, wherein the Q number associated with the LUT is Q31, such that all values of the entries represent fractional values.

17. An encoding device, comprising:

a memory that stores a lookup table (LUT) having entries generated as an inverse function of an index B, each entry generated according to floor(2^31 / B) + 1, wherein floor represents an operation of rounding down to an integer, and wherein the index B defines a range of values that includes every DC scalar value and every quantization parameter associated with a coding standard; and

means for performing a fixed-point calculation of a floating-point operation A // B for coding according to the coding standard based on values A, B1 and B2, wherein a selected entry of the LUT is divided into components B1 and B2, B1 comprising a high portion of the selected LUT entry and B2 comprising a low portion of the selected LUT entry, and wherein A // B represents integer division of A divided by B rounded to the nearest integer.

18. The encoding device of claim 17, further comprising means for performing DC prediction according to the MPEG-4 video coding standard using the fixed-point calculation when B represents a DC scalar value, and means for performing AC prediction according to the MPEG-4 video coding standard using the fixed-point calculation when B represents a quantization parameter. |
Fixed-point integer division technique for AC/DC prediction in video coding devices

TECHNICAL FIELD

The present invention relates to video coding and, more specifically, to AC/DC prediction such as the AC/DC prediction used for intra coding in the MPEG-4 standard and other video coding standards.

BACKGROUND

Digital video capabilities can be incorporated into a variety of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital cameras, digital recording devices, cellular or satellite radiotelephones and similar devices. Digital video devices can provide significant improvements over conventional analog video systems in creating, modifying, transmitting, storing, recording and playing full-motion video sequences.

Many different video coding standards have been established for encoding and decoding digital video sequences. For example, the Moving Picture Experts Group (MPEG) has developed a number of coding standards, including MPEG-1, MPEG-2 and MPEG-4. Other standards include the International Telecommunication Union (ITU) H.263 standard, QuickTime™ technology developed by Apple Computer of Cupertino, California, Video for Windows™ and Windows™ Media developed by Microsoft Corporation of Redmond, Washington, Indeo™ developed by Intel Corporation, RealVideo™ from RealNetworks, Inc. of Seattle, Washington, and Cinepak™ developed by SuperMac, Inc. Newer versions of these standards and new standards continue to appear and develop, including the ITU H.264 standard and a number of proprietary standards. Many image coding standards have also been developed for still image compression, such as the JPEG standard. JPEG stands for "Joint Photographic Experts Group," which is a standardization body.

Some coding standards may utilize a technique called "AC/DC prediction." AC/DC prediction is sometimes referred to as "intra prediction" and is generally a prediction process associated with intra coding. For example, AC/DC prediction involves identifying another video block within a given video frame or image to be used in intra coding, thereby exploiting redundancy within the frame or image to achieve data compression. In other words, intra coding is generally an intra-frame or intra-image process that compresses the amount of data required to encode a video frame or image, and AC/DC prediction is the process of identifying which adjacent video block should be used to intra code the current video block.

Intra coding can be used alone as a compression technique, for example, for still image compression, but it is more commonly implemented along with other video coding techniques in the compression of video sequences. For example, intra coding can be used in conjunction with inter coding techniques that exploit the similarity between successive video frames (referred to as temporal or inter-frame correlation). When intra coding is used along with inter-frame compression, a video sequence can be compressed more than if inter-frame compression were used exclusively.

For intra coding, the encoder may utilize a mode selection engine, which selects the desired mode for AC/DC prediction. Most video coding standards allow at least two possible AC/DC prediction modes, including an AC prediction mode and a DC prediction mode.
DC prediction refers to intra video block prediction that uses only the DC coefficient of a video block (usually the upper-left coefficient, which may represent the zero-frequency value or the average value of the video block). AC prediction refers to intra video block prediction that uses some or all of the AC coefficients of a video block, which are the remaining (non-DC) coefficients of the video block.

SUMMARY

This disclosure describes techniques implemented by a video encoding device during AC/DC prediction. The techniques can be used to allow encoding devices having fixed-point arithmetic capabilities, such as video encoders implemented in digital signal processors (DSPs), to accurately emulate the floating-point arithmetic used in AC/DC prediction. More specifically, the techniques involve a fixed-point calculation of the floating-point operation (A // B) in the encoding device that is exact for all possible input parameters that may be encountered in AC/DC prediction, where A // B represents integer division of A divided by B rounded to the nearest integer. Half-integer values are rounded away from zero.

The described techniques may involve generating a look-up table (LUT) with entries generated as an inverse function of an index B, where B defines a range of values containing every DC scalar value and every quantization parameter associated with the coding standard. For example, for the MPEG-4 standard, B may have a range of [1, 46], which covers every DC scalar value and every quantization parameter associated with MPEG-4. In order to accurately emulate the floating-point operation A // B, the selected entry of the LUT can be divided into components B1 and B2, which comprise the high and low portions of the selected LUT entry. The fixed-point calculation of the floating-point operation (A // B) can comprise the result given by:

(((B1*A) << 1) + ((B2*A) >> 15) + 32768) >> 16

where << indicates a left shift operation, >> indicates a right shift operation, and 32768 represents a constant used to ensure rounding to the nearest integer. Other equations that can further reduce the number of processing cycles required to perform the calculation in a digital signal processor (DSP) are also identified below.

In one embodiment, the present invention describes a method for performing the fixed-point calculation of a floating-point operation (A // B) in an encoding device, where A // B represents integer division of A divided by B rounded to the nearest integer.
The method includes selecting an entry from a look-up table (LUT) with entries generated as an inverse function of an index B, where B defines a range of values that includes every DC scalar value and every quantization parameter associated with the coding standard, and calculating A // B for coding according to the coding standard based on the values A, B1 and B2, where B1 and B2 comprise the high and low portions of the selected entry of the LUT.

In another embodiment, the present invention describes an encoding device that includes a look-up table (LUT) with entries generated as an inverse function of an index B, where the index B defines a range of values that includes every DC scalar value and every quantization parameter associated with the coding standard, and a fixed-point calculation unit that performs the fixed-point calculation of the floating-point operation (A // B) for coding according to the coding standard based on the values A, B1 and B2, where B1 and B2 comprise the high and low portions of the selected entry of the LUT, and where A // B represents integer division of A divided by B rounded to the nearest integer.

These and other techniques described herein may be implemented in an encoding device in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in a digital signal processor (DSP) or another device that performs fixed-point operations. In that case, the software that executes the techniques may be initially stored in a computer-readable medium and loaded and executed in the DSP for fixed-point calculation of floating-point operations in the encoding device.

Additional details of various embodiments are set forth in the drawings and the following description. Other features, objects and advantages will be readily apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an exemplary encoding device suitable for implementing the techniques described herein.

FIG. 2 is a conceptual diagram illustrating a DC predicted video block.

FIG. 3 is a conceptual diagram illustrating AC predicted video blocks.

FIGS. 4 to 6 are flow diagrams according to embodiments of the present invention.

DETAILED DESCRIPTION

The present invention describes techniques implemented by an encoding device during AC/DC prediction. AC/DC prediction is generally a process used in intra coding techniques, in which another video block within a given video frame or image is identified for intra coding the current video block. The techniques involve a fixed-point calculation of the floating-point operation (A // B) in the encoding device, where A // B represents integer division of A divided by B rounded to the nearest integer. Half-integer values are rounded away from zero.

AC/DC prediction according to coding standards such as MPEG-4 and other standards requires the integer-division floating-point calculation (A // B), which may be difficult or costly (in terms of processing cycles) to reproduce in fixed-point devices such as digital signal processors (DSPs). Importantly, the techniques described herein ensure computational accuracy for every possible input combination that may be encountered during AC/DC prediction according to video coding standards such as MPEG-4. This is very important for video coding, because an erroneous prediction can lead to error propagation that corrupts the video coding.
Moreover, relative to conventional techniques or implementations, the techniques described herein can reduce the number of processing cycles that a DSP uses to produce the correct result of the floating-point operation.

More specifically, the techniques involve a fixed-point calculation of the floating-point operation (A // B) in the encoding device that is exact for all possible input parameters that may be encountered in AC/DC prediction, where A // B represents integer division of A divided by B rounded to the nearest integer. For example, 3 // 2 is rounded to 2, and -3 // 2 is rounded to -2. The techniques can be used for DC prediction, in which case A represents the unquantized DC coefficient of the video block to be encoded and B represents the DC scalar value used to quantize the DC coefficient. The same techniques can also be used for AC prediction, in which case A represents the product of a quantized AC coefficient and the quantization parameter of the candidate prediction block, and B represents the quantization parameter of the video block to be encoded.

As described in greater detail below, a look-up table (LUT) is used in the calculation. The LUT has entries generated as an inverse function of an index B. The index B defines a range of values that contains every DC scalar value and every quantization parameter associated with the coding standard. For example, for the MPEG-4 standard, B may have a range of [1, 46], which covers every DC scalar value and every quantization parameter associated with MPEG-4. Specifically, in MPEG-4, the DC scalar value falls within the range [8, 46], and the quantization parameter falls within the range [1, 31]. Therefore, an inverse LUT with values in the range [1, 46] covers all possible denominators for the A // B calculation in AC/DC prediction for MPEG-4.

In order to accurately emulate the result of the floating-point operation A // B, the selected entry of the LUT can be divided into components B1 and B2, which comprise the high and low portions of the selected LUT entry. The fixed-point calculation of the floating-point operation (A // B) can comprise the result given by:

(((B1*A) << 1) + ((B2*A) >> 15) + 32768) >> 16

where << indicates a left shift operation, >> indicates a right shift operation, and 32768 represents a constant used to ensure rounding to the nearest integer.

In addition, some DSP calculations can be simplified by performing the fixed-point calculation of the floating-point operation (A // B) as the result given by:

((B1*C) + ((B2*C) >> 16) + 32768) >> 16

where << indicates a left shift operation, >> indicates a right shift operation, C represents (2*A), and 32768 represents a constant used to ensure rounding to the nearest integer. These equations generally assume that the Q number associated with the LUT is Q31, which means that all values of the entries represent fractional values.

For other implementations of the LUT that have a smaller Q number, such that some values of the entries are not fractional, more general equations can be used, such as:

(((B1*A) << (32-QNumber)) + ((B2*A) >> (QNumber-16)) + 32768) >> 16

or

((B1*A) + ((B2*A) >> 16) + (1 << (QNumber-17))) >> (QNumber-16)

Similar to the case of Q number 31, for any Q number less than or equal to 28, the fixed-point DSP implementation can also proceed as follows:

((B1*C) + ((B2*C) >> 16) + 32768) >> 16

which can be easier to implement in most DSPs.
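As a minimal C sketch of the Q31 calculation just described, the following may be illustrative. It assumes two's-complement arithmetic with arithmetic right shifts of negative values (typical of the DSP targets discussed herein), and the function and table names are chosen for the example rather than taken from this disclosure:

#include <assert.h>
#include <stdint.h>

/* Inverse table for B in [1, 46]; entry = floor(2^31 / B) + 1 (Q31). */
static uint32_t inv_table[47];

static void init_inv_table(void)
{
    for (uint32_t b = 1; b <= 46; b++)
        inv_table[b] = (uint32_t)(((uint64_t)1 << 31) / b) + 1u;
}

/* A // B: integer division rounded to the nearest integer, half-integer
 * values rounded away from zero, for A in [-2048, 2047], B in [1, 46].
 * Implements (((B1*A) << 1) + ((B2*A) >> 15) + 32768) >> 16. */
static int32_t fixed_div(int32_t A, int32_t B)
{
    uint32_t e  = inv_table[B];
    int32_t  b1 = (int32_t)(e >> 16);       /* high 16 bits of entry */
    int32_t  b2 = (int32_t)(e & 0xFFFFu);   /* low 16 bits of entry  */
    return (((b1 * A) << 1) + ((b2 * A) >> 15) + 32768) >> 16;
}

/* Equivalent form with C = 2*A, as in the second equation above. */
static int32_t fixed_div2(int32_t A, int32_t B)
{
    uint32_t e  = inv_table[B];
    int32_t  b1 = (int32_t)(e >> 16);
    int32_t  b2 = (int32_t)(e & 0xFFFFu);
    int32_t  c  = 2 * A;                    /* fits in 16 bits */
    return ((b1 * c) + ((b2 * c) >> 16) + 32768) >> 16;
}

int main(void)
{
    init_inv_table();
    assert(fixed_div(3, 2) == 2);       /* 1.5 rounds away from zero  */
    assert(fixed_div(-3, 2) == -2);     /* -1.5 rounds away from zero */
    assert(fixed_div(2, 3) == 1);       /* 0.67 rounds to 1           */
    assert(fixed_div2(-2048, 46) == -45);
    return 0;
}

Sweeping A over [-2048, 2047] and B over [1, 46] with a harness like the one in main() reproduces the exact agreement with rounded floating-point division reported in the simulations discussed below.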
FIG. 1 is a block diagram illustrating an exemplary encoding device 10 that includes such a fixed-point calculation unit. The encoding device 10 generally refers to any encoding device that uses AC/DC prediction as part of an intra-prediction coding technique. Thus, although the device 10 is illustrated as including both an inter prediction encoder 14 and an intra prediction encoder 16, the techniques described herein generally apply to the intra prediction encoder 16, and therefore can be performed in devices that do not perform inter-prediction encoding. For example, the techniques may also be used for image compression in digital cameras or other imaging devices. The encoders 14, 16 are typically fixed-point devices, such as one or more DSPs.

In the example of FIG. 1, the encoding device 10 is a video encoding device. Examples of video encoding devices include digital televisions, digital video cameras, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital recording devices, cellular or satellite radiotelephones and similar devices. In general, any device that performs the encoding techniques described herein may be an encoding device. The techniques are most useful, however, for smaller devices that implement a digital signal processor (DSP) or another device that does not perform floating-point calculations.

As illustrated in FIG. 1, the encoding device 10 includes a memory 12 coupled to the inter prediction encoder 14 and the intra prediction encoder 16. The memory 12 may include any volatile or non-volatile storage elements. In some cases, the memory 12 may include both on-chip and off-chip memory. For example, the memory 12 may include a relatively large off-chip memory space for storing video sequences, and a smaller and faster local on-chip memory used during the encoding process. In that case, the off-chip memory may include dynamic random access memory (DRAM) or flash memory, and the local on-chip memory may include synchronous dynamic random access memory (SDRAM). For simplicity, however, a single memory 12 is illustrated to represent any number of memory elements that can be used to facilitate video encoding.

Each of the encoders 14 and 16 may form part of an integrated encoder/decoder (CODEC), or may include only encoding or decoding elements. In any case, the encoders 14 and 16 may be implemented, together or separately, in hardware, software, firmware, one or more digital signal processors (DSPs), microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete hardware components, or various combinations thereof. The encoders 14 and 16 include, at least in part, fixed-point devices such as DSPs, making the techniques described herein for computing floating-point operations in a fixed-point manner applicable.

The inter prediction encoder 14 generally refers to the encoding elements that exploit inter-frame correlation for video sequence compression. For example, the inter prediction encoder 14 may perform motion estimation and motion compensation techniques in order to achieve inter-frame compression of video sequences. Motion estimation refers to the process of comparing the current video block with video blocks of other frames in order to identify the "best prediction" video block. Once the "best prediction" is identified for the current video block during motion estimation, the inter prediction encoder 14 can use motion compensation to encode the differences between the current video block and the best prediction.
Motion compensation includes the process of creating a difference block that indicates the differences between the current video block to be encoded and the best prediction. In particular, motion compensation generally refers to the act of using a motion vector to fetch the best prediction block and then subtracting the best prediction from an input block to generate the difference block. The difference block typically contains substantially less data than the original video block it represents.

After motion compensation has created the difference block, the inter prediction encoder 14 may also perform a series of additional encoding steps to further encode the difference block and further compress the data. These additional encoding steps may depend on the coding standard being used. For example, in an MPEG-4 compliant encoder, the additional encoding steps may include an 8x8 discrete cosine transform, followed by scalar quantization, followed by raster-to-zigzag reordering, followed by run-length encoding, followed by Huffman encoding. The encoded difference block can be transmitted by the encoding device 10 along with a motion vector that indicates which video block from the previous frame (or subsequent frame) was used for the encoding (a transmitter is not illustrated for simplicity).

The intra prediction encoder 16 includes the encoding elements that exploit intra-frame correlation to achieve intra-frame compression. The intra prediction encoder 16 may be used alone to achieve intra-frame or image compression, but is more commonly implemented along with the inter prediction encoder 14 as an additional part of video sequence compression. For MPEG-4 compliant encoding of video sequences, the intra prediction encoder 16 may be invoked after scalar quantization and before raster-to-zigzag reordering. Again, the intra prediction process includes an AC/DC prediction process of identifying an adjacent video block to be used in the intra coding, followed by encoding steps that use the identified video block for intra-frame compression. The techniques described herein apply to the AC/DC prediction process performed by the intra prediction encoder 16.

The intra prediction encoder 16 includes a mode selection engine 17, which selects the mode to be used for AC/DC prediction, e.g., selects the AC mode or the DC mode for the prediction. An AC/DC prediction unit 18 then performs AC prediction or DC prediction according to the selected mode. Moreover, in accordance with the present invention, the AC/DC prediction unit 18 invokes a fixed-point calculation unit 15 in order to efficiently and effectively perform, in the fixed-point device, the floating-point operation of integer division rounded to the nearest integer. As described herein, the fixed-point calculation unit 15 selects an entry from a look-up table (LUT) 19, which may be stored in the memory 12, and applies an equation that ensures that the fixed-point calculation exactly matches the result of the floating-point operation for the input parameters associated with AC/DC prediction. In this manner, the encoding device 10 is implemented as a fixed-point device, such as a DSP, yet can efficiently and effectively compute the results of the floating-point operations required for AC/DC prediction.

The possible modes selectable by the mode selection engine 17 may differ for different video coding standards. According to MPEG-4, there are two general modes: the DC mode and the AC mode. In addition, each mode may have horizontal and vertical directions.
The DC mode uses only DC coefficients, while the AC mode can use DC and AC coefficients. The AC mode may alternatively be referred to as the AC+DC mode because both DC coefficients and AC coefficients are used.

In a relatively simple example, if the DC difference between blocks B and A (see FIG. 2) is less than the DC difference between blocks B and C, then the vertical mode is used. If the DC difference between blocks B and C is less than the DC difference between blocks B and A (see FIG. 2), then the horizontal mode is used. Once the prediction direction is determined, the decision whether to use the DC-only mode or the AC+DC mode in the selected direction can be made based on which mode works best for the given scenario. Many variations on these mode selections are possible, however. For decoding, the mode is identified by bits or flags in the bitstream. In general, any mode can be used for the intra prediction, but selecting a particular mode in certain scenarios can improve the quality of the intra coding process. Moreover, the techniques described herein can allow for fixed-point calculation of all possible floating-point operations that may be encountered in AC/DC prediction.

FIG. 2 is a conceptual diagram illustrating a DC predicted video block. In this example, the mode selection engine 17 may have selected the DC mode as the mode to be used for the intra prediction. As shown in FIG. 2, the AC/DC prediction unit 18 performs DC prediction based on the DC component of one of the neighboring video blocks A or C. In DC prediction according to MPEG-4, the calculation F[0][0] // dc_scaler is performed, where F[0][0] is the unquantized DC coefficient and dc_scaler is the DC scalar value used to quantize the coefficient F[0][0]. After saturation checks, for MPEG-4, F[0][0] is in the range [-2048, 2047] and dc_scaler is in the range [8, 46].

The LUT 19 stores entries comprising the inverse function of all possible values of dc_scaler. The fixed-point calculation unit 15 selects from the LUT 19 the appropriate entry corresponding to the dc_scaler being used. The fixed-point calculation unit 15 then applies the selected entry with the coefficient F[0][0] according to an equation that separates the high portion of the selected entry from the low portion of the selected entry in order to simplify the overall calculation. For example, in this case, the fixed-point calculation unit 15 may apply an equation such as:

(((B1*F[0][0]) << 1) + ((B2*F[0][0]) >> 15) + 32768) >> 16

in order to obtain the result of F[0][0] // dc_scaler. In this equation, B1 and B2 represent the high portion of the selected entry and the low portion of the selected entry, respectively, << indicates a left shift operation, >> indicates a right shift operation, and 32768 represents a constant used to ensure rounding to the nearest integer.

The LUT 19 stores inverse entries corresponding not only to the inverse function of all possible values of dc_scaler, but also to the inverse function of all possible values of the quantization parameters that may be used according to the coding standard. For MPEG-4, the quantization parameter has a range of [1, 31]. Therefore, if the LUT 19 stores inverse entries spanning the range [1, 46], then the LUT 19 represents inverse entries corresponding to all possible values of dc_scaler and all possible values of the quantization parameter. Accordingly, the same LUT 19 can be invoked to emulate the floating-point calculations for both AC prediction and DC prediction.

FIG.
3 is a conceptual diagram illustrating AC predicted video blocks (e.g., the AC portion of AC+DC prediction). In this example, the mode selection engine 17 has selected the AC mode as the mode to be used for the intra prediction. As shown in FIG. 3, the AC/DC prediction unit 18 performs AC prediction based on the vertical AC components c1-c7 of block A or the horizontal AC components r1-r7 of block C. Moreover, the DC component can also be used for AC+DC prediction, but this is shown separately in FIG. 2 for simplicity.

In AC prediction according to MPEG-4, the calculation (QFP*QPP) // QPX is performed for each AC coefficient of the prediction video block. Again, if video block A (FIG. 3) is used for the prediction, then the AC coefficients used are c1-c7 of block A, and if block C is used for the prediction, then the AC coefficients used are r1-r7 of block C. QFP represents a given quantized AC coefficient of the block being used for the prediction, and QPP represents the quantization parameter associated with that block. The product (QFP*QPP) is not exactly the same as the unquantized AC coefficient, but, as specified by the MPEG-4 standard, this value is bounded to within half of the unquantized AC coefficient, which falls fully within the range [-2048, 2047]. QPX represents the quantization parameter associated with the block X (FIG. 3) to be encoded, and falls within the range [1, 31] according to MPEG-4.

Because the LUT 19 stores entries comprising the inverse function of all possible values of QPX, the fixed-point calculation unit 15 selects from the LUT 19 the appropriate entry corresponding to the QPX being used. The fixed-point calculation unit 15 then applies the selected entry with the product (QFP*QPP) according to an equation that separates the high portion of the selected entry from the low portion of the selected entry in order to simplify the overall calculation. For example, in this case, the fixed-point calculation unit 15 may apply an equation such as:

(((B1*(QFP*QPP)) << 1) + ((B2*(QFP*QPP)) >> 15) + 32768) >> 16

in order to obtain the result of (QFP*QPP) // QPX. In this equation, B1 and B2 represent the high portion of the selected entry and the low portion of the selected entry, respectively, << indicates a left shift operation, >> indicates a right shift operation, and 32768 represents a constant used to ensure rounding to the nearest integer.

In general, the goal of the techniques described herein is to facilitate a good fixed-point calculation of "A // B," using the inverse of B, where the number A is in the range [-2048, 2047] and the number B is in the range [1, 46]. Again, the techniques can also simplify such calculations and possibly reduce the number of processing cycles required to perform the calculations in fixed-point devices.

The LUT 19 can define a Q number corresponding to Q31, which means that all values of the entries of the LUT 19 correspond to fractional values. Each entry in the table is calculated as floor(2^31 / B) + 1, where B is the index of the table and is within the range [1, 46]. The operation "floor" indicates an operation of rounding down to an integer.
The following exemplary pseudocode can be used to generate the table:

unsigned q31Num, i;                  /* unsigned integer variables */
unsigned int vEncInverseTable[46];   /* the table of inverse entries */
q31Num = 1u << 31;                   /* 2^31 */
for (i = 1; i <= 46; i++)            /* iterate to produce each entry */
{
    vEncInverseTable[i-1] = q31Num / i + 1;
}

An example of the table produced for the LUT 19 is as follows:

vEncInverseTable[46] = {
0x80000001, 0x40000001, 0x2AAAAAAB, 0x20000001, 0x1999999a,
0x15555556, 0x12492493, 0x10000001, 0x0E38E38f, 0x0CCCCCCD,
0x0BA2E8BB, 0x0AAAAAAB, 0x09D89D8A, 0x0924924A, 0x08888889,
0x08000001, 0x07878788, 0x071C71C8, 0x06BCA1B0, 0x06666667,
0x06186187, 0x05D1745E, 0x0590B217, 0x05555556, 0x051EB852,
0x04EC4EC5, 0x04BDA130, 0x04924925, 0x0469EE59, 0x04444445,
0x04210843, 0x04000001, 0x03E0F83F, 0x03C3C3C4, 0x03A83A84,
0x038E38E4, 0x03759F23, 0x035E50D8, 0x03483484, 0x03333334,
0x031F3832, 0x030C30C4, 0x02FA0BE9, 0x02E8BA2F, 0x02D82D83,
0x02C8590C};

Simulations have shown that, for every possible combination of numerator ([-2048, 2047]) and denominator ([1, 46]), the results produced using the inverse tables and equations described herein exactly match the corresponding floating-point calculations. Therefore, the inverse tables and equations described herein are exact for the purpose of AC/DC prediction according to MPEG-4. Again, the fixed-point calculation of the floating-point operation (A // B) may comprise the result given by:

(((B1*A) << 1) + ((B2*A) >> 15) + 32768) >> 16

where << indicates a left shift operation, >> indicates a right shift operation, and 32768 represents a constant used to ensure rounding to the nearest integer.

In addition, some DSP calculations can be simplified by performing the fixed-point calculation of the floating-point operation (A // B) as the result given by:

((B1*C) + ((B2*C) >> 16) + 32768) >> 16

where << indicates a left shift operation, >> indicates a right shift operation, C represents (2*A), and 32768 represents a constant used to ensure rounding to the nearest integer. Since C = 2*A is within 16 bits ([-4096, 4094]), there is no loss of accuracy due to the multiplication by 2. In addition, C = 2*A may be generated during idle cycles in some DSPs. These equations generally assume that the Q number associated with the LUT is Q31, which means that all values of the entries represent fractional values.

The following are examples of inverse tables that can be used with Q numbers Q30-Q18 for the same ranges of numerator ([-2048, 2047]) and denominator ([1, 46]). These tables may be used for different DSPs and different instruction sets.
These tables may be used for different DSPs and different instruction sets.vEncInverseTable_Q30 = {0x40000001, 0x20000001, 0x15555556, 0x10000001, 0xccccccd,0xaaaaaab, 0x924924a, 0x8000001, 0x71c71c8, 0x6666667,0x5d1745e, 0x5555556, 0x4ec4ec5, 0x4924925, 0x4444445,0x4000001, 0x3c3c3c4, 0x38e38e4, 0x35e50d8, 0x3333334,0x30c30c4, 0x2e8ba2f, 0x2c8590c, 0x2aaaaab, 0x28f5c29,0x2762763, 0x25ed098, 0x2492493, 0x234f72d, 0x2222223,0x2108422, 0x2000001, 0x1f07c20, 0x1e1e1e2, 0x1d41d42,0x1c71c72, 0x1bacf92, 0x1af286c, 0x1a41a42, 0x199999a,0x18f9c19, 0x1861862, 0x17d05f5, 0x1745d18, 0x16c16c2,0x1642c86};vEncInverseTable_Q29 = {0x20000001, 0x10000001, 0xaaaaaab, 0x8000001, 0x6666667,0x5555556, 0x4924925, 0x4000001, 0x38e38e4, 0x3333334,0x2e8ba2f, 0x2aaaaab, 0x2762763, 0x2492493, 0x2222223,0x2000001, 0x1e1e1e2, 0x1c71c72, 0x1af286c, 0x199999a,0x1861862, 0x1745d18, 0x1642c86, 0x1555556, 0x147ae15,0x13b13b2, 0x12f684c, 0x124924a, 0x11a7b97, 0x1111112,0x1084211, 0x1000001, 0xf83e10, 0xf0f0f1, 0xea0ea1,0xe38e39, 0xdd67c9, 0xd79436, 0xd20d21, 0xcccccd,0xc7ce0d, 0xc30c31, 0xbe82fb, 0xba2e8c, 0xb60b61,0xb21643};vEncInverseTable_Q28 = {0x10000001, 0x8000001, 0x5555556, 0x4000001, 0x3333334,0x2aaaaab, 0x2492493, 0x2000001, 0x1c71c72, 0x199999a,0x1745d18, 0x1555556, 0x13b13b2, 0x124924a, 0x1111112,0x1000001, 0xf0f0f1, 0xe38e39, 0xd79436, 0xcccccd,0xc30c31, 0xba2e8c, 0xb21643, 0xaaaaab, 0xa3d70b,0x9d89d9, 0x97b426, 0x924925, 0x8d3dcc, 0x888889,0x842109, 0x800001, 0x7c1f08, 0x787879, 0x750751,0x71c71d, 0x6eb3e5, 0x6bca1b, 0x690691, 0x666667,0x63e707, 0x618619, 0x5f417e, 0x5d1746, 0x5b05b1,0x590b22};vEncInverseTable_Q27 = {0x8000001, 0x4000001, 0x2aaaaab, 0x2000001, 0x199999a,0x1555556, 0x124924a, 0x1000001, 0xe38e39, 0xcccccd,0xba2e8c, 0xaaaaab, 0x9d89d9, 0x924925, 0x888889,0x800001, 0x787879, 0x71c71d, 0x6bca1b, 0x666667,0x618619, 0x5d1746, 0x590b22, 0x555556, 0x51eb86,0x4ec4ed, 0x4bda13, 0x492493, 0x469ee6, 0x444445,0x421085, 0x400001, 0x3e0f84, 0x3c3c3d, 0x3a83a9,0x38e38f, 0x3759f3, 0x35e50e, 0x348349, 0x333334,0x31f384, 0x30c30d, 0x2fa0bf, 0x2e8ba3, 0x2d82d9,0x2c8591};vEncInverseTable_Q26 = {0x4000001, 0x2000001, 0x1555556, 0x1000001, 0xcccccd,0xaaaaab, 0x924925, 0x800001, 0x71c71d, 0x666667,0x5d1746, 0x555556, 0x4ec4ed, 0x492493, 0x444445,0x400001, 0x3c3c3d, 0x38e38f, 0x35e50e, 0x333334,0x30c30d, 0x2e8ba3, 0x2c8591, 0x2aaaab, 0x28f5c3,0x276277, 0x25ed0a, 0x24924a, 0x234f73, 0x222223,0x210843, 0x200001, 0x1f07c2, 0x1e1e1f, 0x1d41d5,0x1c71c8, 0x1bacfa, 0x1af287, 0x1a41a5, 0x19999a,0x18f9c2, 0x186187, 0x17d060, 0x1745d2, 0x16c16d,0x1642c9};vEncInverseTable_Q25 = {0x2000001, 0x1000001, 0xaaaaab, 0x800001, 0x666667,0x555556, 0x492493, 0x400001, 0x38e38f, 0x333334,0x2e8ba3, 0x2aaaab, 0x276277, 0x24924a, 0x222223,0x200001, 0x1e1e1f, 0x1c71c8, 0x1af287, 0x19999a,0x186187, 0x1745d2, 0x1642c9, 0x155556, 0x147ae2,0x13b13c, 0x12f685, 0x124925, 0x11a7ba, 0x111112,0x108422, 0x100001, 0xf83e1, 0xf0f10, 0xea0eb,0xe38e4, 0xdd67d, 0xd7944, 0xd20d3, 0xccccd,0xc7ce1, 0xc30c4, 0xbe830, 0xba2e9, 0xb60b7,0xb2165};vEncInverseTable_Q24 = {0x1000001, 0x800001, 0x555556, 0x400001, 0x333334,0x2aaaab, 0x24924a, 0x200001, 0x1c71c8, 0x19999a,0x1745d2, 0x155556, 0x13b13c, 0x124925, 0x111112,0x100001, 0xf0f10, 0xe38e4, 0xd7944, 0xccccd,0xc30c4, 0xba2e9, 0xb2165, 0xaaaab, 0xa3d71,0x9d89e, 0x97b43, 0x92493, 0x8d3dd, 0x88889,0x84211, 0x80001, 0x7c1f1, 0x78788, 0x75076,0x71c72, 0x6eb3f, 0x6bca2, 0x6906a, 0x66667,0x63e71, 0x61862, 0x5f418, 0x5d175, 0x5b05c,0x590b3};vEncInverseTable_Q23 = {0x800001, 0x400001, 0x2aaaab, 0x200001, 0x19999a,0x155556, 
0x124925, 0x100001, 0xe38e4, 0xccccd,0xba2e9, 0xaaaab, 0x9d89e, 0x92493, 0x88889,0x80001, 0x78788, 0x71c72, 0x6bca2, 0x66667,0x61862, 0x5d175, 0x590b3, 0x55556, 0x51eb9,0x4ec4f, 0x4bda2, 0x4924a, 0x469ef, 0x44445,0x42109, 0x40001, 0x3e0f9, 0x3c3c4, 0x3a83b,0x38e39, 0x375a0, 0x35e51, 0x34835, 0x33334,0x31f39, 0x30c31, 0x2fa0c, 0x2e8bb, 0x2d82e,0x2c85a};
vEncInverseTable_Q22 = {0x400001, 0x200001, 0x155556, 0x100001, 0xccccd,0xaaaab, 0x92493, 0x80001, 0x71c72, 0x66667,0x5d175, 0x55556, 0x4ec4f, 0x4924a, 0x44445,0x40001, 0x3c3c4, 0x38e39, 0x35e51, 0x33334,0x30c31, 0x2e8bb, 0x2c85a, 0x2aaab, 0x28f5d,0x27628, 0x25ed1, 0x24925, 0x234f8, 0x22223,0x21085, 0x20001, 0x1f07d, 0x1e1e2, 0x1d41e,0x1c71d, 0x1bad0, 0x1af29, 0x1a41b, 0x1999a,0x18f9d, 0x18619, 0x17d06, 0x1745e, 0x16c17,0x1642d};
vEncInverseTable_Q21 = {0x200001, 0x100001, 0xaaaab, 0x80001, 0x66667,0x55556, 0x4924a, 0x40001, 0x38e39, 0x33334,0x2e8bb, 0x2aaab, 0x27628, 0x24925, 0x22223,0x20001, 0x1e1e2, 0x1c71d, 0x1af29, 0x1999a,0x18619, 0x1745e, 0x1642d, 0x15556, 0x147af,0x13b14, 0x12f69, 0x12493, 0x11a7c, 0x11112,0x10843, 0x10001, 0xf83f, 0xf0f1, 0xea0f,0xe38f, 0xdd68, 0xd795, 0xd20e, 0xcccd,0xc7cf, 0xc30d, 0xbe83, 0xba2f, 0xb60c,0xb217};
vEncInverseTable_Q20 = {0x100001, 0x80001, 0x55556, 0x40001, 0x33334,0x2aaab, 0x24925, 0x20001, 0x1c71d, 0x1999a,0x1745e, 0x15556, 0x13b14, 0x12493, 0x11112,0x10001, 0xf0f1, 0xe38f, 0xd795, 0xcccd,0xc30d, 0xba2f, 0xb217, 0xaaab, 0xa3d8,0x9d8a, 0x97b5, 0x924a, 0x8d3e, 0x8889,0x8422, 0x8001, 0x7c20, 0x7879, 0x7508,0x71c8, 0x6eb4, 0x6bcb, 0x6907, 0x6667,0x63e8, 0x6187, 0x5f42, 0x5d18, 0x5b06,0x590c};
vEncInverseTable_Q19 = {0x80001, 0x40001, 0x2aaab, 0x20001, 0x1999a,0x15556, 0x12493, 0x10001, 0xe38f, 0xcccd,0xba2f, 0xaaab, 0x9d8a, 0x924a, 0x8889,0x8001, 0x7879, 0x71c8, 0x6bcb, 0x6667,0x6187, 0x5d18, 0x590c, 0x5556, 0x51ec,0x4ec5, 0x4bdb, 0x4925, 0x469f, 0x4445,0x4211, 0x4001, 0x3e10, 0x3c3d, 0x3a84,0x38e4, 0x375a, 0x35e6, 0x3484, 0x3334,0x31f4, 0x30c4, 0x2fa1, 0x2e8c, 0x2d83,0x2c86};
vEncInverseTable_Q18 = {0x40001, 0x20001, 0x15556, 0x10001, 0xcccd,0xaaab, 0x924a, 0x8001, 0x71c8, 0x6667,0x5d18, 0x5556, 0x4ec5, 0x4925, 0x4445,0x4001, 0x3c3d, 0x38e4, 0x35e6, 0x3334,0x30c4, 0x2e8c, 0x2c86, 0x2aab, 0x28f6,0x2763, 0x25ee, 0x2493, 0x2350, 0x2223,0x2109, 0x2001, 0x1f08, 0x1e1f, 0x1d42,0x1c72, 0x1bad, 0x1af3, 0x1a42, 0x199a,0x18fa, 0x1862, 0x17d1, 0x1746, 0x16c2,0x1643};

For LUT implementations that have a smaller Q number, such that some entry values are not fractions, more general equations can be used, such as:

(((B1 * A) << (32 - QNumber)) + ((B2 * A) >> (QNumber - 16)) + 32768) >> 16

or

((B1 * A) + ((B2 * A) >> 16) + (1 << (QNumber - 17))) >> (QNumber - 16)

Similarly, as with the Q number Q31, for any Q number less than or equal to 28 the fixed-point DSP implementation can also proceed as follows:

((B1 * C) + ((B2 * C) >> 16) + 32768) >> 16

which is more convenient for programmers on most DSPs.

FIG. 4 is a flowchart illustrating a technique for performing a floating-point calculation for AC/DC prediction in a fixed-point device such as a DSP. As shown in FIG. 4, the fixed-point calculation unit 15 selects an entry (41) from a look-up table (LUT) 19 having entries generated as an inverse function of B. The fixed-point calculation unit 15 then calculates the integer division A // B based on the value of A and the high and low parts of the selected entry (42).
For example, the fixed-point calculation unit 15 may apply the equation:

(((B1 * A) << 1) + ((B2 * A) >> 15) + 32768) >> 16

where << denotes a left shift operation, >> denotes a right shift operation, 32768 is a constant that ensures rounding to the nearest integer, and B1 and B2 are the high and low 16-bit parts of the selected LUT entry, respectively. The AC/DC prediction unit 18 may then use the calculation of A // B to perform AC/DC prediction (43).

FIG. 5 is a flowchart illustrating a more specific technique of performing a floating-point calculation for DC prediction in a fixed-point device such as a DSP. As shown in FIG. 5, the fixed-point calculation unit 15 selects an entry (51) from a look-up table (LUT) 19 having entries generated as an inverse function of the possible DC scalars. The fixed-point calculation unit 15 then calculates the integer division F[0][0] // (DC scalar) based on the value of the DC scalar and the high and low parts of the selected entry (52). In this case, F[0][0] is the unquantized DC coefficient, and (DC scalar) is the DC scalar value used to quantize the coefficient F[0][0]. For example, the fixed-point calculation unit 15 may apply the equation:

(((B1 * F[0][0]) << 1) + ((B2 * F[0][0]) >> 15) + 32768) >> 16

in order to obtain the result of F[0][0] // (DC scalar). In this equation, B1 and B2 represent the upper 16 bits and the lower 16 bits of the selected entry, respectively, << denotes a left shift operation, >> denotes a right shift operation, and 32768 is a constant that ensures rounding to the nearest integer. The AC/DC prediction unit 18 may then use the calculation of F[0][0] // (DC scalar) to perform DC prediction (53).

FIG. 6 is a flowchart illustrating a more specific technique of performing a floating-point calculation for AC prediction in a fixed-point device such as a DSP. As shown in FIG. 6, the fixed-point calculation unit 15 selects an entry (61) from a look-up table (LUT) 19 having entries generated as an inverse function of the possible quantization parameters QPX of the video block X. For each AC coefficient used in AC prediction, the fixed-point calculation unit then calculates the integer division (QFP * QPP) // QPX based on the values of QFP and QPP and the high and low portions of the selected entry corresponding to QPX (62). In this case, QFP represents a given quantized AC coefficient of the block used for prediction, QPP represents the quantization parameter associated with that block, and QPX represents the quantization parameter associated with the block X to be encoded. For example, the fixed-point calculation unit 15 may apply the equation:

(((B1 * (QFP * QPP)) << 1) + ((B2 * (QFP * QPP)) >> 15) + 32768) >> 16

in order to obtain the result of (QFP * QPP) // QPX. In this equation, B1 and B2 represent the upper 16 bits and the lower 16 bits of the selected entry, respectively, << denotes a left shift operation, >> denotes a right shift operation, and 32768 is a constant that ensures rounding to the nearest integer.

Compared with other methods of simulating floating-point calculations with the accuracy required for all individual inputs that may be encountered in MPEG-4 AC/DC prediction, the techniques described herein can significantly reduce the number of DSP processing cycles. Advantageously, the techniques described herein can function without the need for sign checking or other pre-processing or post-processing steps.

Many different embodiments have been described.
In particular, techniques for simulating floating-point calculations have been described that have the required accuracy for all individual inputs that may be encountered in MPEG-4 AC/DC prediction. Furthermore, the technique can reduce the number of processing cycles required to perform such calculations relative to other methods. However, various modifications can be made to the technology described herein without departing from the spirit and scope of the invention. For example, the techniques may be adapted for use with other coding standards by modifying the lookup table to cover all possible inputs of those standards. The technique is described generally in terms of video coding, meaning that it can be applied to video encoding, video decoding, or both.

The techniques described herein may be implemented in hardware, software, firmware, or any combination thereof in an encoding device. If implemented in software, the software may be executed in a digital signal processor (DSP) or another device that performs fixed-point operations. In that case, the software that executes the techniques may initially be stored in a computer-readable medium and then loaded and executed in the DSP to perform the fixed-point calculation of floating-point operations in the encoding device. For example, computer-readable media may include random access memory (RAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, and the like. These and other embodiments are within the scope of the following claims. |
Methods and apparatuses associated with sharing cryptographic keys (160) in a network domain (140). An embedded agent (121, 131) on a network endpoint (120, 130) participates in the distribution of cryptographic keys. In one embodiment the embedded agent receives and stores a shared symmetric key, as do embedded agents on other network endpoints in the same network domain. The embedded agent causes the shared key to be stored in a secure storage not directly accessible by the host. When the host wants to transmit enciphered data, the embedded agent may provide access to cryptographic services. The embedded agent provides isolation of the shared key from parts of the host that are subject to compromise by attack or infection. |
CLAIMS

What is claimed is:

1. A method comprising: provisioning a symmetric cryptographic key across multiple clients through multiple embedded agents, each client having one of the embedded agents, the embedded agent in each client to store the symmetric cryptographic key in a storage accessible to the embedded agent and not directly accessible to a host processor on the client; and providing access to an encrypted traffic flow in a network to a client if the client is authenticated with the key.

2. A method according to claim 1, wherein provisioning the key through the embedded agents further comprises provisioning the key through an embedded agent having network access via a network link not visible to a host operating system (OS) running on the client.

3. A method according to claim 2, wherein providing access to the traffic flow if the client is authenticated comprises the embedded agent authenticating the client over the network link not visible to the host OS.

4. A method according to claim 1, wherein providing access to the traffic flow further comprises providing multiple clients access with the key to nodes in the network, the nodes in the network to decrypt the traffic flow and subsequently encrypt the traffic flow to transmit the traffic to a next node in the network.

5. A method according to claim 1, further comprising updating at a client the symmetric cryptographic key provisioned across the multiple clients through a public and private key exchange with a public and private key associated with the client.

6. A method according to claim 1, wherein providing access if the client is authenticated further comprises: the embedded agent verifying that a platform associated with the client is not compromised; and the embedded agent providing the key and an assertion that the client is not compromised to a verification entity on the network.

7. A method according to claim 6, further comprising the embedded agent indicating to a remote network device if the client is compromised.

8. A method according to claim 6, further comprising the embedded agent foreclosing network access to the client if the client is compromised.

9. A method according to claim 1, further comprising the embedded agent performing cryptographic functions on data with the key to authenticate data with the key.

10. A method according to claim 1, further comprising the embedded agent including a derivative of the key in a header of data to be transmitted to authenticate the data with the key.

11. An apparatus comprising: a host platform on the apparatus including a host processor; a secure memory not visible to applications and an operating system (OS) running on the host platform; and an embedded computational device communicatively coupled with the host platform, the embedded device to have a network link transparent to the OS, the embedded device to manage a cryptographic key shared among the apparatus and network endpoints to be used to communicate with a server over the network, to receive the cryptographic key on the transparent link and authenticate the apparatus, and to store the cryptographic key in the secure memory.

12. An apparatus according to claim 11, wherein the embedded device to have a transparent network link comprises the embedded device to have a network connection not accessible by the host platform, the link to comply with the transport layer security (TLS) protocol.
13. An apparatus according to claim 11, wherein the embedded device to have a transparent network link comprises the embedded device to have a network connection not accessible by the host platform, the link to comply with the secure sockets layer (SSL) protocol.

14. An apparatus according to claim 11, wherein the embedded device to authenticate the apparatus comprises the embedded device to verify the identity of the apparatus to a network switching device with the key, the key to also be used by the network endpoints to verify their respective identities to the network switching device, and the network switching device to decrypt encrypted traffic from the apparatus and the network endpoints.

15. An apparatus according to claim 11, wherein the embedded device to authenticate the apparatus comprises the embedded device to hash traffic to be transmitted with the key.

16. An apparatus according to claim 11, wherein the embedded device to authenticate the apparatus comprises the embedded device to perform cryptographic services with the key on traffic to be transmitted.

17. An apparatus according to claim 11, wherein the embedded device to authenticate the apparatus comprises the embedded device to include a derivative of the key in a header of traffic to be transmitted.

18. An apparatus according to claim 11, further comprising a second embedded computational device, the second embedded device integrated on the host platform, to verify the security of the host platform.

19. An apparatus according to claim 18, wherein the first embedded device does not authenticate the apparatus if the second embedded device determines the host platform is not secure.

20. An apparatus according to claim 18, further comprising a bi-directional private bus between the first and second embedded devices.

21. An apparatus according to claim 11, further comprising a counter mode hardware cryptographic module on the host platform to encipher traffic with the cryptographic key and further provide a counter mode enciphering of the enciphered traffic.

22. A system comprising: a host platform including a host processor; a digital signal processor (DSP) coupled with the host platform; and an embedded chipset including a secure key storage module, the embedded chipset to perform cryptographic key management of a shared cryptographic key with the secure key storage module and a private communication channel accessible to the chipset and not to the host platform, and to access an image of the host platform stored in a flash memory to determine the integrity of the host platform, the shared cryptographic key to be used by the host platform and other networked devices within a virtual private network to encipher data.

23. A system according to claim 22, wherein the embedded chipset to perform cryptographic key distribution with the private communication channel comprises the embedded chipset to perform cryptographic key distribution with a communication channel complying with the transport layer security (TLS) protocol.

24. A system according to claim 22, wherein the embedded chipset comprises an embedded controller agent and an embedded firmware agent, the firmware agent to determine the integrity of the host platform, and the controller agent to operate the private communication channel and manage access by the host platform to secure network connections.
25. A system according to claim 24, further comprising a bi-directional private communication path between the embedded controller agent and the embedded firmware agent to allow the agents to interoperate outside the awareness of the host platform.

26. A system according to claim 22, further comprising the embedded chipset to hash traffic to be transmitted with the key to authenticate the system to one of the other networked devices.

27. A system according to claim 22, further comprising the embedded chipset to perform cryptographic services with the key on traffic to be transmitted to authenticate the system to one of the other networked devices.

28. A system according to claim 22, further comprising the embedded chipset to include a derivative of the key in a header of traffic to be transmitted to authenticate the system to one of the other networked devices.

29. An article of manufacture comprising a machine accessible medium having content to provide instructions to cause a machine to perform operations including: provisioning a symmetric cryptographic key across multiple clients through multiple embedded agents, each client having one of the embedded agents, the embedded agent in each client to store the symmetric cryptographic key in a storage accessible to the embedded agent and not directly accessible to a host processor on the client; and providing access to an encrypted traffic flow in a network to a client if the client is authenticated with the key.

30. An article of manufacture according to claim 29, wherein the content to provide instruction to cause the machine to perform operations including provisioning the key through the embedded agents further comprises the content to provide instruction to cause the machine to perform operations including provisioning the key through an embedded agent having network access via a network link not visible to a host operating system (OS) running on the client.

31. An article of manufacture according to claim 30, wherein the content to provide instruction to cause the machine to perform operations including providing access to the traffic flow if the client is authenticated comprises the content to provide instruction to cause the machine to perform operations including authenticating the client with the embedded agent over the network link not visible to the host OS.

32. An article of manufacture according to claim 29, wherein the content to provide instruction to cause the machine to perform operations including providing access to the traffic flow further comprises the content to provide instruction to cause the machine to perform operations including providing multiple clients access with the key to nodes in the network, the nodes in the network to decrypt the traffic flow and subsequently encrypt the traffic flow to transmit the traffic to a next node in the network.

33. An article of manufacture according to claim 29, further comprising the content to provide instruction to cause the machine to perform operations including updating at a client the symmetric cryptographic key provisioned across the multiple clients through a public and private key exchange with a public and private key associated with the client.
34. An article of manufacture according to claim 29, wherein the content to provide instruction to cause the machine to perform operations including providing access if the client is authenticated further comprises the content to provide instruction to cause the machine to perform operations including: verifying with the embedded agent that a platform associated with the client is not compromised; and providing with the embedded agent the key and an assertion that the client is not compromised to a verification entity on the network.

35. An article of manufacture according to claim 34, further comprising the content to provide instruction to cause the machine to perform operations including indicating with the embedded agent to a remote network device if the client is compromised.

36. An article of manufacture according to claim 34, further comprising the content to provide instruction to cause the machine to perform operations including foreclosing with the embedded agent network access to the client if the client is compromised.

37. An article of manufacture according to claim 29, further comprising the content to provide instruction to cause the machine to perform operations including performing cryptographic functions on data with the key to authenticate data with the key.

38. An article of manufacture according to claim 29, further comprising the content to provide instruction to cause the machine to perform operations including placing a derivative of the key in a header of data to be transmitted to authenticate the data with the key. |
METHOD, APPARATUSES AND COMPUTER PROGRAM PRODUCT FOR SHARING CRYPTOGRAPHIC KEY WITH AN EMBEDDED AGENT ON A NETWORK ENDPOINT IN A NETWORK DOMAIN

RELATED APPLICATION

[0001] This Application is related to U.S. Patent Application No. TBD, entitled "Cooperative Embedded Agents," and filed concurrently herewith.

FIELD

[0002] Embodiments of the invention relate to cryptography, and specifically to the sharing of a cryptographic key among multiple clients.

BACKGROUND

[0003] Current cryptographic techniques used for encryption of network traffic employ key distribution protocols capable of getting private keys to the endpoints desiring to engage in secure communication. Alternately, these private keys are distributed to the endpoints in advance of the secure communication by some other means (e.g., delivery service, in person, electronically, etc.). When an endpoint is a personal computing device, the keys are typically stored on a hard drive or other persistent storage device and are accessible to the operating system. This potentially makes the keys accessible to applications running on the operating system. Keys stored in this fashion can be accessed by an attacker who successfully compromises the operating system.

[0004] In groups of networked endpoints, when one endpoint is compromised, the lack of security of the keys used for secure communication can potentially lead to compromise of other endpoints on the network. Another potentially more serious problem is the ability of the compromising agent (hacker, virus, etc.) to obtain the keys that may later be used to obtain and decrypt data from the secure communication channels. Thus, compromise of a system may lead to loss of cryptographic keys, which in turn may compromise any secure communication that relies on those keys.

[0005] Other problems associated with the keys used for secure communication among endpoints in a network are potential difficulties with management and distribution. From a management standpoint, the storing and verifying of keys can become a difficult task as the number of endpoints in a network domain grows. While a network device, such as a switch or firewall, may be able to manage keys for each client to which it is connected, as the number grows, the limited memory and computational resources of the network device may prevent it from managing keys for all connected endpoints. From a distribution standpoint, there may be difficulty in provisioning keys and keeping track of who has what keys, when keys should be changed, etc.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The description of embodiments of the invention includes various illustrations by way of example, and not by way of limitation, in the figures and accompanying drawings, in which like reference numerals refer to similar elements.

[0007] Figure 1 is one embodiment of a block diagram of a network system with clients sharing a cryptographic key.

[0008] Figure 2 is one embodiment of a block diagram of a client having a secure storage and an embedded agent.

[0009] Figure 3 is one embodiment of a block diagram of a network endpoint device.

[0010] Figure 4 is one embodiment of a block diagram of elements of a network endpoint device.
[0011] Figure 5 is one embodiment of a flow diagram of accessing a traffic flow with a shared cryptographic key.

[0012] Figure 6 is one embodiment of a block diagram of use of an infrastructure device with endpoints having embedded agents for sharing a cryptographic key.

DETAILED DESCRIPTION

[0013] Methods and apparatuses associated with sharing cryptographic keys among multiple network devices are described. A shared cryptographic key is provisioned to multiple devices to use in secure communication. Thus, each device will use the same shared key to engage in secure communication. In one embodiment a shared key is provisioned to clients of a virtual network. The shared key identifies the client as a trusted device in the network, and enables the device to securely communicate with endpoints in the network.

[0014] The shared cryptographic key is managed in a client by an embedded agent. The embedded agent operates independently of a platform on the client host device. A secure storage is used to store the key, and is accessible by the embedded agent, but not by the host operating system. The shared key is thus kept secret from the host operating system.

[0015] Figure 1 is one embodiment of a block diagram of a network system with clients sharing a cryptographic key. Virtual private group (VPG) 110 represents endpoints of a network that share a cryptographic key. As illustrated, client 120 and client 130 use a common cryptographic key to encrypt/decrypt secure data for communication over network 140 with server 150.

[0016] Clients 120 and 130 include a combination of logic and processor(s). Some of the hardware may include embedded code (firmware) that is stored on and run on the hardware. Also, clients 120 and 130 include user interfaces allowing a user to interact with client 120 and/or client 130. Clients 120 and 130 will include an operating system (OS), which is the main code used to control the flow of execution and instruction on clients 120 and 130. The OS may include, e.g., Windows® operating systems from Microsoft® Corporation, Linux, etc. The OS will typically be stored in a persistent storage (e.g., a hard drive) and initialized with boot-up of the client systems. The OS provides a user interface to client 120 and/or 130, and provides an environment in which applications may be executed. The hardware, firmware, and software aspects of a client 120 or 130 are to be understood as being the platform of the client.

[0017] Client 120 is shown with embedded agent (EA) 121, and client 130 is shown with EA 131. Embedded agents 121 and 131 represent embedded systems on clients 120 and 130, respectively, that receive and manage the shared key. In one embodiment embedded agents 121 and 131 are systems including embedded processors, a secure key storage, and a cryptographic agent. The cryptographic agent may be implemented in hardware or software running on a device in clients 120 or 130, or a combination of these. The cryptographic agent performs the actual authenticating of data for clients 120 and 130 with the shared key. Authenticating the data with the shared key may include, e.g., hashing the data to authenticate, or sign, it; encrypting the data with the key; or placing a derivative of the key in a header associated with the data in transmission.
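As a concrete illustration of the last option, placing a derivative of the key in a header, the following toy C sketch tags an outbound buffer with a value derived from both the shared key and the payload. The function is hypothetical and is not a cryptographically secure MAC; a real embedded agent would use an authenticated construction such as HMAC or the counter mode hardware described later.

#include <stddef.h>
#include <stdint.h>

/* Toy keyed tag: fold the shared key and then the payload into an
 * FNV-1a style hash. Illustrative only; not collision resistant. */
static uint32_t keyed_tag(const uint8_t *data, size_t len,
                          const uint8_t key[16])
{
    uint32_t h = 2166136261u;            /* FNV-1a offset basis */
    for (size_t i = 0; i < 16; i++)      /* mix in the shared key */
        h = (h ^ key[i]) * 16777619u;
    for (size_t i = 0; i < len; i++)     /* then the payload */
        h = (h ^ data[i]) * 16777619u;
    return h;                            /* value placed in the header */
}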
[0018] Embedded agents 121 and 131 may be firmware that is run on a processor on the host system that is independent from the main processor or central processing unit (CPU) of the system. In one embodiment aspects of the hardware/firmware that make up embedded agents 121 and 131 are integrated into the same die as a chip or chipset of the platform.

[0019] Network 140 is intended to represent any type of network, and may include a variety of network hardware (e.g., routers, switches, firewalls, hubs, traffic monitors, etc.). Each hop 141-143 in network 140 represents one such device. Hop 1 141 may be considered to be an aggregation point for network 140, because it aggregates the traffic incoming to network 140 from the clients of VPG 110. Note that while three hops are shown in Figure 1, hop 1 141, hop 2 142, and hop N 143, it is to be understood that there may be more or fewer hops that traffic will take across network 140 from VPG 110 to server 150. In one embodiment network 140 merely consists of the communication line between clients 120 and 130 and server 150; thus, there are zero "hops."

[0020] Key distribution 160 represents a trusted network entity to store, distribute, and otherwise manage cryptographic keys for the endpoints and devices of network 140. In one embodiment key distribution 160 maintains a public and private key associated with each endpoint on network 140. Key distribution 160 operates to distribute the shared keys to all systems in a domain sharing cryptographic keys, such as VPG 110. For example, VPG 110 may be considered a network domain because it includes a group of clients associated with each other in one topographical view of network 140. A virtual private network (VPN) may be another example of a domain where a cryptographic key may be shared among multiple endpoints.

[0021] In one embodiment key distribution 160 periodically updates the shared key. The periodicity of key changing is dependent on factors such as how susceptible to attempted attack or infection the domain is, the number of client systems in the domain, the burden of key management, etc. For example, key changing may occur once per hour, once daily, once weekly, etc. Key distribution 160 may initiate the updating of the shared key by indicating a change to clients 120 and 130. Alternatively, key distribution 160 may update the key in association with a public/private key exchange with the clients.

[0022] Figure 2 is one embodiment of a block diagram of a client having a secure storage and an embedded agent. Virtual private group (VPG) client 200 may be a client from a VPG as described in Figure 1. VPG client 200 includes a host processor 210 that is the main processor in the computational platform of client 200. When client 200 is operational, host processor 210 includes host OS 220 that generally controls the environment of client 200. Host OS 220 is shown with user application threads 221-222, which represent applications and/or threads of applications running on host processor 210. There may be fewer or more user application threads than shown in Figure 2.

[0023] Client 200 includes a platform chipset 230. The platform chipset may include memory hubs and/or controllers, input/output (I/O) hubs and/or controllers, memory subsystems, peripheral controllers, etc. Platform chipset 230 is coupled with host processor 210 by means of one or more communication buses. For example, a peripheral component interconnect (PCI) bus is one common bus in a PC. In alternate embodiments, the host processor is coupled with platform chipset 230 by means of a proprietary bus.

[0024] In one embodiment platform chipset 230 includes cryptographic (crypto) module 231.
Cryptographic module 231 represents hardware (embedded chips, logic, etc.) and/or code running on platform chipset 230 that provides cryptographic services for client 200. In one embodiment hardware cryptographic module 231 may include a Galois counter mode encryption module (GCM) to add another layer of encryption on top of enciphered data. An example algorithm that may be used by a GCM includes the Advanced Encryption Standard (AES).

[0025] Platform chipset 230 includes crypto engine 232, an embedded agent in client 200. Crypto engine 232 represents a cryptographic control system embedded on platform chipset 230. In one embodiment, crypto engine 232 includes an embedded processor or other computational device that has a direct connection to the network, as shown by communication channel 233. Communication channel 233 may represent one or multiple private communication channels. For example, in one embodiment crypto engine 232 represents multiple embedded agents on client 200, each with private network access. In a case where multiple private communication channels are used, access may be arbitrated.

[0026] Communication channel 233 may represent a channel over the same physical line as network link 234, but communication channel 233 is private to crypto engine 232, and is thus transparent to host processor 210. Thus, host processor 210 may have access to network link 234, but not to communication channel 233. In one embodiment communication channel 233 from crypto engine 232 to the network complies with the transport layer security (TLS) or the secure sockets layer (SSL) protocols. Other protocols may include Internet Protocol Security (IPsec) and Wired Equivalent Privacy (WEP). Host processor 210 will have network access through network link 234, including secure communication access.

[0027] In traditional systems the cryptographic keys used to encipher traffic intended for a secure network connection were accessible to host processor 210. In one embodiment crypto engine 232 provides the keys for enciphering traffic, and provides access to hardware encryption services. In this manner host processor 210 may not have access to the keys, even though the host processor can request the encryption services through crypto engine 232.

[0028] Client 200 also includes secure storage 240, which is accessible by platform chipset 230, but is independent of, and transparent to, host processor 210. Secure storage 240 represents a combination of non-volatile storage (e.g., flash) with logic that prevents unauthorized access to the non-volatile storage. For example, secure storage 240 may be a trusted platform module (TPM).

[0029] In one embodiment client 200 may include flash 250. Flash 250 represents a non-volatile storage upon which data related to the security of client 200 may be stored. For example, in one embodiment an image of the host is stored that can be verified to make sure the system has not been compromised. The determination of whether the system has been compromised is performed by an agent on platform chipset 230 that is part of, or works in conjunction with, crypto engine 232. In this way crypto engine 232 may determine whether the system is compromised before providing a compromised system with access to network link 234.
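One way to picture this verification is the hedged C sketch below: a freshly computed digest of the host image is compared against a reference digest held in secure storage. The helper functions are hypothetical placeholders, since the patent does not specify how the image is hashed or where the reference value is kept.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define DIGEST_LEN 32

/* Hypothetical helpers assumed to exist in the embedded agent firmware. */
extern void compute_image_digest(uint8_t out[DIGEST_LEN]);  /* hash the host image */
extern void read_reference_digest(uint8_t out[DIGEST_LEN]); /* from secure storage */

/* Gate network access on the host image matching its known-good digest. */
static bool platform_is_trusted(void)
{
    uint8_t fresh[DIGEST_LEN], reference[DIGEST_LEN];
    compute_image_digest(fresh);
    read_reference_digest(reference);
    return memcmp(fresh, reference, DIGEST_LEN) == 0;
}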
[0030] Figure 3 is one embodiment of a block diagram of a network endpoint device having cooperative embedded agents. The block diagram of Figure 3 is intended to represent a broad category of electronic systems having network interfaces. The electronic system can be, for example, a desktop computer system, a mobile computer system, a server, a personal digital assistant (PDA), a cellular telephone, a set-top box, game console, satellite receiver, etc.

[0031] In one embodiment, processor 310 may be coupled to memory controller hub 320 by front side bus 315. While the electronic system of Figure 3 is described as having a single processor, multiple processor embodiments can also be supported. In an alternate embodiment, processor 310 may be coupled with memory controller hub 320 by a shared system bus. Processor 310 can be any type of processor known in the art, for example, a processor from the Pentium® family of processors, the Itanium® family of processors, the Xeon® family of processors, available from Intel Corporation of Santa Clara, California. Other processors can also be used.

[0032] Memory controller hub 320 may provide an interface to memory subsystem 325 that can include any type of memory to be used with the electronic system. Memory controller hub 320 may also be coupled with input/output (I/O) controller hub (ICH) 330. In one embodiment, ICH 330 may provide an interface between the system and peripheral I/O devices 380 as well as between the system and network interface 340, which may provide an interface to external network 390. Digital signal processor (DSP) 331 may also be coupled with ICH 330. Network 390 may be any type of network, whether wired or wireless, for example, a local area network or a wide area network.

[0033] In one embodiment, ICH 330 may be coupled with secure memory structure 370, which may provide security and/or cryptographic functionality. In one embodiment, secure memory structure 370 may be implemented as a trusted platform module (TPM). Secure memory structure 370 may provide a secure identifier, for example, a cryptographic key in a secure manner to embedded agent 351.

[0034] Embedded agent 351 represents an embedded module or modules, whether in hardware or firmware, with a private network connection transparent to host processor 310. In one embodiment embedded agent 351 may be considered to have at least two separate parts that may be physically or merely logically separate. Embedded agent 351 may physically be separate from ICH 330. In another embodiment, embedded agent 351 is physically integrated with ICH 330.

[0035] Embedded controller agent 350 may be coupled with ICH 330 and with network 390. The network connection for embedded controller agent 350 may be independent of the operation of the system and is independent of an operating system executed by processor 310. In one embodiment, embedded controller agent 350 may include a microcontroller or other type of processing circuitry, memory and interface logic. Embodiments of embedded agent 351 are described in greater detail below.

[0036] In one embodiment, embedded controller agent 350 may be coupled with processor 310 via an interrupt interface. For example, embedded controller agent 350 may be coupled with the SMI pin of a Pentium® processor or with the PMI pin of an Itanium® processor (generically, xMI line 355). Other system interrupt signals may be used for other processors.

[0037] ICH 330 may also be coupled with embedded firmware agent 360. In one embodiment, embedded firmware agent 360 may be a mechanism that enables executable content in the form of one or more software drivers to be loaded into a management mode of processor 310. Embedded agent 351 may be executed in a combination of hardware and/or software.
The software may be transmitted to the system of Figure 3 by means of a machine accessible medium, which includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). For example, a machine accessible medium includes recordable/non-recordable media (e.g., read only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.), and electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), etc.

[0038] In one embodiment, embedded controller agent 350 may be coupled with embedded firmware agent 360 via agent bus 365. Agent bus 365 may be a bi-directional private bus between the elements of embedded agent 351. Because one or more aspects of embedded agent 351 may be firmware, agent bus 365 is to be understood as a logical, functional connection between embedded controller agent 350 and embedded firmware agent 360, and not necessarily a physical link. By communicating over agent bus 365, embedded controller agent 350 and embedded firmware agent 360 may be configured to provide manageability and/or security functionality to the system in a secure and convenient manner.

[0039] In one embodiment, embedded controller agent 350 may provide an integrity check on the system for security purposes, for example, prior to establishing a secure or trusted connection with a remote device via network 390. Embedded controller agent 350 may perform a virus scan of the system to determine whether communication with the remote device is safe and/or whether support is required from the remote device. Embedded firmware agent 360 may provide an operating system-independent, secure storage for use by embedded controller agent 350 in performing the integrity check.

[0040] During operation, embedded controller agent 350 may perform periodic integrity checks to provide enhanced security as compared to a single integrity check. Embedded controller agent 350 can also perform integrity checks prior to communication with remote management devices.

[0041] Figure 4 is one embodiment of a block diagram of elements of a network endpoint device. I/O controller hub (ICH) 410 represents I/O controller hardware on a computing device. ICH 410 may be a chip or chipset with the control logic and interfaces, together with any discrete components that may make up ICH 410. In one embodiment embedded agent 411 is integrated onto the hardware of ICH 410. If ICH 410 is a chipset, embedded agent 411 may be a chip in the chipset and/or firmware on a chip in the chipset. If ICH 410 is a single chip, embedded agent 411 may be a separate circuit integrated onto the ICH 410 chip, and may share I/O pins, or have dedicated I/O pins in the package, or be embedded firmware in a storage of the chip (e.g., read-only memory, flash) that is executed by the ICH 410 chip.

[0042] In one embodiment embedded agent 411 includes an embedded firmware agent to participate in the distribution of cryptographic keys, and to manage a shared key or keys. A shared key is a key that is shared among multiple clients as part of a virtual group.
The ability of the clients in the virtual group to function as a virtual group and use a shared private key depends upon the distributed ability of each client to maintain the security of the shared key.

[0043] To maintain the security of the shared key, embedded agent 411 has private network connectivity as represented by agent line 412. A private network connection refers to a connection that is not visible to and/or not accessible by a host operating system. To provide the best security, agent line 412 should be isolated from the central processor of the endpoint device. This is because the central processor may be subject to compromise from attack, and denying the central processor direct access to agent line 412 means that even if an OS running on the central processor is compromised, the security of agent line 412 will likely not be compromised.

[0044] To communicate on agent line 412, embedded agent 411 may utilize the shared cryptographic key. Thus, security of each client in the virtual group is ensured by the use of an embedded agent, such as embedded agent 411, that has a private network link inaccessible to the host processor over which the embedded agent may receive and distribute the shared key. The use of the shared key is thus transparent to the host processor, and will not be compromised by an attack on the host processor.

[0045] Traditional systems have also been vulnerable to attacks because their cryptographic keys were stored in memory accessible to the OS or user applications. To ensure the security of the shared cryptographic key, embedded agent 411 interfaces with secure key storage (SKS) 421 located on the host platform. In one embodiment, SKS 421 is located on network interface 420. Network interface 420 represents a network card (e.g., a network interface card (NIC)), or a network interface circuit integrated onto the hardware/firmware platform of the host computing device. Embedded agent 411 will receive a shared key to be used by each client in the virtual group to identify the client as a member of the virtual group. Embedded agent 411 passes the key to SKS 421 and causes the key to be stored.

[0046] In another embodiment, SKS 421 resides on the platform hardware rather than on network interface 420. For example, SKS 421 could be a separate chip on a main circuit board. In another example, SKS 421 could be integrated with embedded agent 411, such as by integrating the logic of embedded agent 411 and the memory and logic of SKS 421 on a single integrated circuit or system on a chip.

[0047] The key exchange between SKS 421 and embedded agent 411, GCM 422, and/or other hardware in the system will typically be across a private bus, or a bus not generally accessible in a host system. Alternatively, the internal key exchange may take place with encryption across a more generally accessible system bus.

[0048] In one embodiment network interface 420 also includes Galois counter mode encryption module (GCM) 422. In alternate embodiments other hardware encryption modules may be used. GCM 422 may be hardware embedded on the system, or software running on an embedded entity on the system. GCM 422 has secure access to SKS 421 as described above. GCM 422 may obtain the shared key from SKS 421 to perform cryptographic services on data intended for secure transmission on the network.
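A minimal C sketch of that hardware path follows. The function names are hypothetical, since the patent does not define a programming interface for SKS 421 or GCM 422; the sketch only shows the shape of the flow, in which the key is fetched over a private bus and used to encipher an outbound buffer.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical platform services: fetch the shared key over the private
 * bus, and encipher a buffer with an authenticated counter mode. */
extern int sks_read_key(uint8_t key_out[16]);
extern int gcm_encrypt(const uint8_t key[16],
                       const uint8_t *plain, size_t len,
                       uint8_t *cipher, uint8_t tag_out[16]);

static int secure_transmit_prepare(const uint8_t *plain, size_t len,
                                   uint8_t *cipher, uint8_t tag[16])
{
    uint8_t key[16];
    if (sks_read_key(key) != 0)   /* in hardware, the key would remain inside the module */
        return -1;
    return gcm_encrypt(key, plain, len, cipher, tag);
}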
[0049] Figure 5 is one embodiment of a flow diagram of accessing a traffic flow with a shared cryptographic key. A system that participates in a network with shared cryptographic key(s) will typically obtain and store a shared key for use in secure communication. A system according to embodiments of the invention described herein may have a key from boot-up of a host operating system running on the system. The system at some point requests to transmit over a secure communication link to an endpoint on the network, 502. The system includes hardware and/or firmware to provide secure access to and secure storage of a shared symmetric cryptographic key. This includes an embedded agent that maintains the cryptographic key(s).

[0050] In one embodiment, prior to a transmission in the virtual network, the embedded agent verifies security of the platform, 504. In alternate embodiments the security may be known beforehand from prior verification. In the shared key network, the security is dependent upon each client securing the shared key, and preventing the client computing device from transmitting over a secure link if the client is compromised. Thus, the embedded agent verifies the client platform, including the software running on the client, to determine if the client has been compromised by, e.g., a virus, worm, hacker attack, etc.

[0051] In a system that uses a shared cryptographic key, the sharing of the key presents many advantages in terms of management and integration of the system with other network hardware. However, security of the shared key becomes critically important. A compromise of a client that results in dissemination of the shared key would destroy trust in the security of all secure communication in the network among clients sharing the key. Thus, in one embodiment the integrity of the system platform is constantly monitored to verify that it is secure. Even if the platform is determined to be free from compromise and the system continues to perform other operations, monitoring of the system integrity may be continued in parallel with the other operations. Note that "in parallel" does not necessarily imply that a single system element performs both the monitoring and the other operations. There may be different hardware, software, and/or firmware elements independently and/or concurrently performing the system operations and the monitoring functions.

[0052] If the embedded agent determines that the platform has been compromised, 510, the embedded agent may perform security protection operations, 512. Security protection may include, but is not limited to, transmitting on the secure link to a network manager that the client has been compromised, causing execution of security software, causing the client to reboot, preventing the client from transmitting to the network on its network access ports, etc. These operations may be performed in combination as well as individually, or in a sequence.

[0053] If the embedded agent determines that the platform has not been compromised, 510, the cryptographic services module (e.g., hardware, software) is provided access to obtain the shared key from a secure storage to perform encryption/decryption of data, 514. The cryptographic services are then provided with the shared key, 516. In the case of hardware encryption, a hardware module may obtain the key directly through a bus to the secured memory storing the shared key. The key is then used to perform the cryptographic services.
In the case of software encryption, the software may make a call (e.g., an application program interface (API) call) to the embedded agent, which provides access to cryptographic services for the software. For example, access to services may be provided through interchange in a read/write area of system memory, and the shared key is not disclosed to the requesting OS or application(s).

[0054] To communicate over the virtual network of which the client is a part, the client will provide authentication to identify itself to a verification module on the network, 518. For example, a client may provide authentication to a firewall that isolates the virtual network from the outside. In one embodiment the embedded agent provides authentication with a shared key to the verification module over the secure line the embedded agent has to the network. When authenticated, the client may be allowed to transmit, 520.

[0055] Figure 6 is one embodiment of a block diagram of use of an infrastructure device with endpoints having embedded agents for sharing a cryptographic key. Endpoints 610-611 desire to engage in secure communication, and will use enciphering/deciphering of data transmitted over a network connecting them. Endpoints 610-611 include embedded agents 620-621, respectively, and secure memory, illustrated as trusted platform modules (TPMs) 630-631, respectively. The operation of embedded agents 620-621 and TPMs 630-631 is according to embodiments of these devices as discussed above.

[0056] Endpoints 610-611 are shown interacting through infrastructure device 640. Infrastructure device 640 may be, for example, a firewall, a switching device with restricted access services, etc. Infrastructure device 640 provides security by allowing authenticated traffic 650 to pass through infrastructure device 640, and rejecting unauthenticated traffic 660. Authenticated traffic 650 is transmitted through "holes" 641 in infrastructure device 640 opened to authenticated traffic 650.

[0057] To determine whether network data should be trusted (650) or untrusted (660), infrastructure device 640 includes verification engine 642. Verification engine 642 communicates through links 670-671 with embedded agents 620-621 of endpoints 610-611, respectively. In one embodiment the verification information lies in the fact that authenticated data 651 was hashed or cryptographically altered using the shared key. Alternatively, the verification information may lie in the fact that authenticated data 651 includes a header with the shared key or a derivative of the shared key.

[0058] Endpoints 610-611 use shared symmetric cryptographic keys for engaging in secure communication. The shared keys are common to endpoints that are part of a virtual network of devices. Embedded agents 620-621 verify the identity of endpoints 610-611, respectively, as belonging to the virtual network by the use of the shared key. When the identity and security of endpoints 610-611 are verified, they may engage in communication. For example, endpoint 611 may transmit authenticated data 651 to endpoint 610.

[0059] The infrastructure devices of a network may thus be easily used with groups that share private cryptographic keys (a sketch of such a header check follows).
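As an illustration of the header-based check that verification engine 642 may perform, consider the hedged C sketch below. The frame layout and the derive_tag() helper are assumptions made for illustration, not details taken from the patent.

#include <stdbool.h>
#include <stdint.h>

/* Assumed frame header carrying a derivative of the shared key. */
struct frame_header {
    uint32_t auth_tag;
};

/* Hypothetical helper that derives the expected tag from the shared key. */
extern uint32_t derive_tag(const uint8_t shared_key[16]);

/* Admit traffic through the "holes" only when the tag matches. */
static bool admit_frame(const struct frame_header *hdr,
                        const uint8_t shared_key[16])
{
    return hdr->auth_tag == derive_tag(shared_key);
}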
Note that links 670-671, while shown as separate from authenticated data 650, are not necessarily to be understood as referring to separate physical links from endpoints 610-611 to infrastructure device 640. Link 670, which is accessible only to embedded agent 620, may be a private communication channel over the same physical link that carries data on other channels accessible from elements of endpoint 610 that may be subject to compromise. While made in reference to endpoint 610 and secure link 670, the same description applies to endpoint 611 and its associated secure link 671.

[0060] Reference herein to "embodiment" means that a particular feature, structure or characteristic described in connection with the described embodiment is included in at least one embodiment of the invention. Thus, the appearance of phrases such as "in one embodiment" or "in an alternate embodiment" may describe various embodiments of the invention, and may not necessarily all refer to the same embodiment. Besides what is described herein, it will be appreciated that various modifications may be made to embodiments of the invention without departing from their scope. Therefore, the illustrations and examples herein should be construed in an illustrative, and not a restrictive, sense. The scope of the invention should be measured solely by reference to the claims that follow. |